00:00:00.001 Started by upstream project "autotest-per-patch" build number 132754
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.145 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.146 The recommended git tool is: git
00:00:00.146 using credential 00000000-0000-0000-0000-000000000002
00:00:00.148 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.205 Fetching changes from the remote Git repository
00:00:00.210 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.272 Using shallow fetch with depth 1
00:00:00.272 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.272 > git --version # timeout=10
00:00:00.319 > git --version # 'git version 2.39.2'
00:00:00.320 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.342 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.342 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.203 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.214 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.226 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.226 > git config core.sparsecheckout # timeout=10
00:00:06.237 > git read-tree -mu HEAD # timeout=10
00:00:06.253 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.273 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.273 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
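
The checkout above pins jbp to a single revision via a shallow fetch. A minimal sketch of reproducing it by hand (assumes anonymous read access to review.spdk.io; in the job itself Jenkins supplies credentials via GIT_ASKPASS and routes through the HTTP proxy shown above):

    git init jbp && cd jbp
    # Shallow-fetch master and check out the revision pinned in the log.
    git fetch --tags --force --depth=1 \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507
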
00:00:06.365 [Pipeline] Start of Pipeline
00:00:06.379 [Pipeline] library
00:00:06.381 Loading library shm_lib@master
00:00:06.381 Library shm_lib@master is cached. Copying from home.
00:00:06.398 [Pipeline] node
00:00:06.407 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest_2
00:00:06.408 [Pipeline] {
00:00:06.418 [Pipeline] catchError
00:00:06.419 [Pipeline] {
00:00:06.427 [Pipeline] wrap
00:00:06.434 [Pipeline] {
00:00:06.440 [Pipeline] stage
00:00:06.441 [Pipeline] { (Prologue)
00:00:06.454 [Pipeline] echo
00:00:06.455 Node: VM-host-WFP1
00:00:06.459 [Pipeline] cleanWs
00:00:06.468 [WS-CLEANUP] Deleting project workspace...
00:00:06.468 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.474 [WS-CLEANUP] done
00:00:06.666 [Pipeline] setCustomBuildProperty
00:00:06.739 [Pipeline] httpRequest
00:00:07.508 [Pipeline] echo
00:00:07.509 Sorcerer 10.211.164.101 is alive
00:00:07.516 [Pipeline] retry
00:00:07.518 [Pipeline] {
00:00:07.530 [Pipeline] httpRequest
00:00:07.535 HttpMethod: GET
00:00:07.536 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.536 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.548 Response Code: HTTP/1.1 200 OK
00:00:07.549 Success: Status code 200 is in the accepted range: 200,404
00:00:07.549 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:25.938 [Pipeline] }
00:00:25.954 [Pipeline] // retry
00:00:25.960 [Pipeline] sh
00:00:26.238 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:26.255 [Pipeline] httpRequest
00:00:26.681 [Pipeline] echo
00:00:26.683 Sorcerer 10.211.164.101 is alive
00:00:26.690 [Pipeline] retry
00:00:26.691 [Pipeline] {
00:00:26.702 [Pipeline] httpRequest
00:00:26.706 HttpMethod: GET
00:00:26.707 URL: http://10.211.164.101/packages/spdk_1148849d6c67ed21b6e0281b5f8326cf0759ca3e.tar.gz
00:00:26.707 Sending request to url: http://10.211.164.101/packages/spdk_1148849d6c67ed21b6e0281b5f8326cf0759ca3e.tar.gz
00:00:26.729 Response Code: HTTP/1.1 200 OK
00:00:26.729 Success: Status code 200 is in the accepted range: 200,404
00:00:26.730 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_1148849d6c67ed21b6e0281b5f8326cf0759ca3e.tar.gz
00:03:20.987 [Pipeline] }
00:03:21.007 [Pipeline] // retry
00:03:21.014 [Pipeline] sh
00:03:21.296 + tar --no-same-owner -xf spdk_1148849d6c67ed21b6e0281b5f8326cf0759ca3e.tar.gz
00:03:23.846 [Pipeline] sh
00:03:24.125 + git -C spdk log --oneline -n5
00:03:24.125 1148849d6 nvme/rdma: Register UMR per IO request
00:03:24.125 0787c2b4e accel/mlx5: Support mkey registration
00:03:24.125 0ea9ac02f accel/mlx5: Create pool of UMRs
00:03:24.125 60adca7e1 lib/mlx5: API to configure UMR
00:03:24.125 c2471e450 nvmf: Clean unassociated_qpairs on connect error
00:03:24.143 [Pipeline] writeFile
00:03:24.157 [Pipeline] sh
00:03:24.440 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:03:24.453 [Pipeline] sh
00:03:24.733 + cat autorun-spdk.conf
00:03:24.734 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:24.734 SPDK_TEST_NVME=1
00:03:24.734 SPDK_TEST_FTL=1
00:03:24.734 SPDK_TEST_ISAL=1
00:03:24.734 SPDK_RUN_ASAN=1
00:03:24.734 SPDK_RUN_UBSAN=1
00:03:24.734 SPDK_TEST_XNVME=1
00:03:24.734 SPDK_TEST_NVME_FDP=1
00:03:24.734 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:24.740 RUN_NIGHTLY=0
00:03:24.741 [Pipeline] }
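
autorun-spdk.conf, dumped above, is a plain shell fragment of KEY=value flags; the scripts that follow simply source it and gate their work on the flags. A minimal sketch of that consumption pattern (illustrative only; the real consumers are prepare_nvme.sh and spdk/autorun.sh, traced below):

    # Flag names taken from the autorun-spdk.conf dumped above.
    source ./autorun-spdk.conf
    if (( SPDK_TEST_NVME_FDP == 1 )); then
        echo "FDP images and tests requested"
    fi
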
00:03:24.752 [Pipeline] // stage
00:03:24.765 [Pipeline] stage
00:03:24.766 [Pipeline] { (Run VM)
00:03:24.774 [Pipeline] sh
00:03:25.051 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:03:25.051 + echo 'Start stage prepare_nvme.sh'
00:03:25.051 Start stage prepare_nvme.sh
00:03:25.051 + [[ -n 3 ]]
00:03:25.051 + disk_prefix=ex3
00:03:25.051 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:03:25.051 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:03:25.051 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:03:25.051 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:25.051 ++ SPDK_TEST_NVME=1
00:03:25.051 ++ SPDK_TEST_FTL=1
00:03:25.051 ++ SPDK_TEST_ISAL=1
00:03:25.051 ++ SPDK_RUN_ASAN=1
00:03:25.051 ++ SPDK_RUN_UBSAN=1
00:03:25.051 ++ SPDK_TEST_XNVME=1
00:03:25.051 ++ SPDK_TEST_NVME_FDP=1
00:03:25.051 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:25.051 ++ RUN_NIGHTLY=0
00:03:25.051 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:03:25.051 + nvme_files=()
00:03:25.051 + declare -A nvme_files
00:03:25.051 + backend_dir=/var/lib/libvirt/images/backends
00:03:25.051 + nvme_files['nvme.img']=5G
00:03:25.051 + nvme_files['nvme-cmb.img']=5G
00:03:25.051 + nvme_files['nvme-multi0.img']=4G
00:03:25.051 + nvme_files['nvme-multi1.img']=4G
00:03:25.051 + nvme_files['nvme-multi2.img']=4G
00:03:25.051 + nvme_files['nvme-openstack.img']=8G
00:03:25.051 + nvme_files['nvme-zns.img']=5G
00:03:25.051 + (( SPDK_TEST_NVME_PMR == 1 ))
00:03:25.051 + (( SPDK_TEST_FTL == 1 ))
00:03:25.051 + nvme_files["nvme-ftl.img"]=6G
00:03:25.051 + (( SPDK_TEST_NVME_FDP == 1 ))
00:03:25.051 + nvme_files["nvme-fdp.img"]=1G
00:03:25.051 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:03:25.051 + for nvme in "${!nvme_files[@]}"
00:03:25.051 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G
00:03:25.051 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:03:25.051 + for nvme in "${!nvme_files[@]}"
00:03:25.051 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G
00:03:25.051 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:03:25.051 + for nvme in "${!nvme_files[@]}"
00:03:25.051 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G
00:03:25.051 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:03:25.051 + for nvme in "${!nvme_files[@]}"
00:03:25.051 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G
00:03:25.310 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:03:25.310 + for nvme in "${!nvme_files[@]}"
00:03:25.310 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G
00:03:25.310 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:03:25.310 + for nvme in "${!nvme_files[@]}"
00:03:25.310 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G
00:03:25.310 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:03:25.310 + for nvme in "${!nvme_files[@]}"
00:03:25.310 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G
00:03:25.310 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:03:25.310 + for nvme in "${!nvme_files[@]}"
00:03:25.310 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G
00:03:25.310 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:03:25.310 + for nvme in "${!nvme_files[@]}"
00:03:25.310 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G
00:03:25.570 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:03:25.570 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu
00:03:25.570 + echo 'End stage prepare_nvme.sh'
00:03:25.570 End stage prepare_nvme.sh
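
The prepare_nvme.sh trace above boils down to an associative array of image names and sizes, with the FTL and FDP images appended only when their flags are set; bash does not order associative-array keys, which is why the images format in arbitrary order. A condensed sketch of that logic, reconstructed from the trace (same create_nvme_img.sh helper as the log):

    backend_dir=/var/lib/libvirt/images/backends
    disk_prefix=ex3
    declare -A nvme_files=(
        [nvme.img]=5G [nvme-cmb.img]=5G [nvme-multi0.img]=4G
        [nvme-multi1.img]=4G [nvme-multi2.img]=4G
        [nvme-openstack.img]=8G [nvme-zns.img]=5G
    )
    # Extra images only when the corresponding test flags are set.
    (( SPDK_TEST_FTL == 1 )) && nvme_files[nvme-ftl.img]=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files[nvme-fdp.img]=1G
    for nvme in "${!nvme_files[@]}"; do
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "${backend_dir}/${disk_prefix}-${nvme}" -s "${nvme_files[$nvme]}"
    done
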
00:03:25.582 [Pipeline] sh
00:03:25.866 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:03:25.866 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:03:25.866
00:03:25.866 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:03:25.866 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:03:25.866 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:03:25.866 HELP=0
00:03:25.866 DRY_RUN=0
00:03:25.866 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,
00:03:25.866 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:03:25.866 NVME_AUTO_CREATE=0
00:03:25.866 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,,
00:03:25.866 NVME_CMB=,,,,
00:03:25.866 NVME_PMR=,,,,
00:03:25.866 NVME_ZNS=,,,,
00:03:25.866 NVME_MS=true,,,,
00:03:25.867 NVME_FDP=,,,on,
00:03:25.867 SPDK_VAGRANT_DISTRO=fedora39
00:03:25.867 SPDK_VAGRANT_VMCPU=10
00:03:25.867 SPDK_VAGRANT_VMRAM=12288
00:03:25.867 SPDK_VAGRANT_PROVIDER=libvirt
00:03:25.867 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:03:25.867 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:03:25.867 SPDK_OPENSTACK_NETWORK=0
00:03:25.867 VAGRANT_PACKAGE_BOX=0
00:03:25.867 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:03:25.867 FORCE_DISTRO=true
00:03:25.867 VAGRANT_BOX_VERSION=
00:03:25.867 EXTRA_VAGRANTFILES=
00:03:25.867 NIC_MODEL=e1000
00:03:25.867
00:03:25.867 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt'
00:03:25.867 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
00:03:29.152 Bringing machine 'default' up with 'libvirt' provider...
00:03:30.127 ==> default: Creating image (snapshot of base box volume).
00:03:30.406 ==> default: Creating domain with the following settings...
00:03:30.407 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733508099_ec8ae1ac0ff67e5d3b56
00:03:30.407 ==> default: -- Domain type: kvm
00:03:30.407 ==> default: -- Cpus: 10
00:03:30.407 ==> default: -- Feature: acpi
00:03:30.407 ==> default: -- Feature: apic
00:03:30.407 ==> default: -- Feature: pae
00:03:30.407 ==> default: -- Memory: 12288M
00:03:30.407 ==> default: -- Memory Backing: hugepages:
00:03:30.407 ==> default: -- Management MAC:
00:03:30.407 ==> default: -- Loader:
00:03:30.407 ==> default: -- Nvram:
00:03:30.407 ==> default: -- Base box: spdk/fedora39
00:03:30.407 ==> default: -- Storage pool: default
00:03:30.407 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733508099_ec8ae1ac0ff67e5d3b56.img (20G)
00:03:30.407 ==> default: -- Volume Cache: default
00:03:30.407 ==> default: -- Kernel:
00:03:30.407 ==> default: -- Initrd:
00:03:30.407 ==> default: -- Graphics Type: vnc
00:03:30.407 ==> default: -- Graphics Port: -1
00:03:30.407 ==> default: -- Graphics IP: 127.0.0.1
00:03:30.407 ==> default: -- Graphics Password: Not defined
00:03:30.407 ==> default: -- Video Type: cirrus
00:03:30.407 ==> default: -- Video VRAM: 9216
00:03:30.407 ==> default: -- Sound Type:
00:03:30.407 ==> default: -- Keymap: en-us
00:03:30.407 ==> default: -- TPM Path:
00:03:30.407 ==> default: -- INPUT: type=mouse, bus=ps2
00:03:30.407 ==> default: -- Command line args:
00:03:30.407 ==> default: -> value=-device,
00:03:30.407 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:03:30.407 ==> default: -> value=-drive,
00:03:30.407 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:03:30.407 ==> default: -> value=-device,
00:03:30.407 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:03:30.407 ==> default: -> value=-device,
00:03:30.407 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:03:30.407 ==> default: -> value=-drive,
00:03:30.407 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0,
00:03:30.407 ==> default: -> value=-device,
00:03:30.407 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:30.407 ==> default: -> value=-device,
00:03:30.407 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:03:30.407 ==> default: -> value=-drive,
00:03:30.407 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:03:30.407 ==> default: -> value=-device,
00:03:30.407 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:30.407 ==> default: -> value=-drive,
00:03:30.407 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:03:30.407 ==> default: -> value=-device,
00:03:30.407 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:30.407 ==> default: -> value=-drive,
00:03:30.407 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:03:30.407 ==> default: -> value=-device,
00:03:30.407 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:30.407 ==> default: -> value=-device,
00:03:30.407 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:03:30.407 ==> default: -> value=-device,
00:03:30.407 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:03:30.407 ==> default: -> value=-drive,
00:03:30.407 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:03:30.407 ==> default: -> value=-device,
00:03:30.407 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
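
Controller nvme-3 above is the FDP configuration: an NVM subsystem with fdp=on and a single raw-backed namespace. Extracted from the args shown, the same fragment as a plain QEMU command line would look roughly like this (illustrative; against the QEMU 8.0 emulator the job uses, and omitting the rest of the machine definition):

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096
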
00:03:30.976 ==> default: Creating shared folders metadata...
00:03:30.976 ==> default: Starting domain.
00:03:32.881 ==> default: Waiting for domain to get an IP address...
00:03:54.900 ==> default: Waiting for SSH to become available...
00:03:54.900 ==> default: Configuring and enabling network interfaces...
00:03:58.189 default: SSH address: 192.168.121.94:22
00:03:58.189 default: SSH username: vagrant
00:03:58.189 default: SSH auth method: private key
00:04:00.738 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:04:10.715 ==> default: Mounting SSHFS shared folder...
00:04:11.662 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:04:11.662 ==> default: Checking Mount..
00:04:13.565 ==> default: Folder Successfully Mounted!
00:04:13.565 ==> default: Running provisioner: file...
00:04:14.503 default: ~/.gitconfig => .gitconfig
00:04:15.069
00:04:15.069 SUCCESS!
00:04:15.069
00:04:15.069 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:04:15.069 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:04:15.069 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:04:15.069
00:04:15.077 [Pipeline] }
00:04:15.094 [Pipeline] // stage
00:04:15.105 [Pipeline] dir
00:04:15.106 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:04:15.107 [Pipeline] {
00:04:15.122 [Pipeline] catchError
00:04:15.124 [Pipeline] {
00:04:15.137 [Pipeline] sh
00:04:15.417 + vagrant ssh-config --host vagrant
00:04:15.417 + sed -ne /^Host/,$p
00:04:15.417 + tee ssh_conf
00:04:18.712 Host vagrant
00:04:18.712 HostName 192.168.121.94
00:04:18.712 User vagrant
00:04:18.712 Port 22
00:04:18.712 UserKnownHostsFile /dev/null
00:04:18.712 StrictHostKeyChecking no
00:04:18.712 PasswordAuthentication no
00:04:18.712 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:04:18.712 IdentitiesOnly yes
00:04:18.712 LogLevel FATAL
00:04:18.712 ForwardAgent yes
00:04:18.712 ForwardX11 yes
00:04:18.712
00:04:18.724 [Pipeline] withEnv
00:04:18.727 [Pipeline] {
00:04:18.740 [Pipeline] sh
00:04:19.017 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:04:19.017 source /etc/os-release
00:04:19.017 [[ -e /image.version ]] && img=$(< /image.version)
00:04:19.017 # Minimal, systemd-like check.
00:04:19.017 if [[ -e /.dockerenv ]]; then
00:04:19.017 # Clear garbage from the node's name:
00:04:19.017 # agt-er_autotest_547-896 -> autotest_547-896
00:04:19.017 # $HOSTNAME is the actual container id
00:04:19.017 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:04:19.017 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:04:19.017 # We can assume this is a mount from a host where container is running,
00:04:19.017 # so fetch its hostname to easily identify the target swarm worker.
00:04:19.017 container="$(< /etc/hostname) ($agent)"
00:04:19.017 else
00:04:19.017 # Fallback
00:04:19.017 container=$agent
00:04:19.017 fi
00:04:19.017 fi
00:04:19.017 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:04:19.017
00:04:19.284 [Pipeline] }
00:04:19.301 [Pipeline] // withEnv
00:04:19.310 [Pipeline] setCustomBuildProperty
00:04:19.324 [Pipeline] stage
00:04:19.326 [Pipeline] { (Tests)
00:04:19.342 [Pipeline] sh
00:04:19.621 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:04:19.891 [Pipeline] sh
00:04:20.173 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:04:20.443 [Pipeline] timeout
00:04:20.443 Timeout set to expire in 50 min
00:04:20.445 [Pipeline] {
00:04:20.461 [Pipeline] sh
00:04:20.737 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:04:21.302 HEAD is now at 1148849d6 nvme/rdma: Register UMR per IO request
00:04:21.312 [Pipeline] sh
00:04:21.588 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:04:21.859 [Pipeline] sh
00:04:22.138 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:04:22.411 [Pipeline] sh
00:04:22.696 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:04:22.971 ++ readlink -f spdk_repo
00:04:22.971 + DIR_ROOT=/home/vagrant/spdk_repo
00:04:22.971 + [[ -n /home/vagrant/spdk_repo ]]
00:04:22.971 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:04:22.971 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:04:22.971 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:04:22.971 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:04:22.971 + [[ -d /home/vagrant/spdk_repo/output ]]
00:04:22.971 + [[ nvme-vg-autotest == pkgdep-* ]]
00:04:22.971 + cd /home/vagrant/spdk_repo
00:04:22.971 + source /etc/os-release
00:04:22.971 ++ NAME='Fedora Linux'
00:04:22.971 ++ VERSION='39 (Cloud Edition)'
00:04:22.971 ++ ID=fedora
00:04:22.971 ++ VERSION_ID=39
00:04:22.971 ++ VERSION_CODENAME=
00:04:22.971 ++ PLATFORM_ID=platform:f39
00:04:22.971 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:22.971 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:22.971 ++ LOGO=fedora-logo-icon
00:04:22.971 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:22.971 ++ HOME_URL=https://fedoraproject.org/
00:04:22.971 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:22.971 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:22.971 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:22.971 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:22.971 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:22.971 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:22.971 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:22.971 ++ SUPPORT_END=2024-11-12
00:04:22.971 ++ VARIANT='Cloud Edition'
00:04:22.971 ++ VARIANT_ID=cloud
00:04:22.971 + uname -a
00:04:22.971 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:22.971 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:23.537 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:23.795 Hugepages
00:04:23.795 node hugesize free / total
00:04:23.795 node0 1048576kB 0 / 0
00:04:23.795 node0 2048kB 0 / 0
00:04:23.795
00:04:23.795 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:23.795 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:23.795 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:23.795 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:04:23.795 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:04:23.795 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
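
The status table above maps QEMU's emulated controllers (vendor 1b36) to PCI BDFs and kernel block devices; the four controllers match the four -device nvme entries from the VM definition. If the NVMe BDFs are needed programmatically, one illustrative way to pull them out of this output (an ad-hoc parse of the table, not an SPDK-provided interface):

    sudo spdk/scripts/setup.sh status | awk '$1 == "NVMe" { print $2 }'
    # Prints 0000:00:10.0 through 0000:00:13.0 for the four controllers above.
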
00:04:23.795 + rm -f /tmp/spdk-ld-path
00:04:23.795 + source autorun-spdk.conf
00:04:23.795 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:23.795 ++ SPDK_TEST_NVME=1
00:04:23.795 ++ SPDK_TEST_FTL=1
00:04:23.795 ++ SPDK_TEST_ISAL=1
00:04:23.795 ++ SPDK_RUN_ASAN=1
00:04:23.795 ++ SPDK_RUN_UBSAN=1
00:04:23.795 ++ SPDK_TEST_XNVME=1
00:04:23.795 ++ SPDK_TEST_NVME_FDP=1
00:04:23.795 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:23.795 ++ RUN_NIGHTLY=0
00:04:23.795 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:23.795 + [[ -n '' ]]
00:04:23.795 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:04:24.053 + for M in /var/spdk/build-*-manifest.txt
00:04:24.053 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:24.053 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:04:24.053 + for M in /var/spdk/build-*-manifest.txt
00:04:24.053 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:24.053 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:04:24.053 + for M in /var/spdk/build-*-manifest.txt
00:04:24.053 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:24.053 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:04:24.053 ++ uname
00:04:24.053 + [[ Linux == \L\i\n\u\x ]]
00:04:24.053 + sudo dmesg -T
00:04:24.053 + sudo dmesg --clear
00:04:24.053 + dmesg_pid=5253
00:04:24.053 + [[ Fedora Linux == FreeBSD ]]
00:04:24.053 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:24.053 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:24.053 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:24.053 + [[ -x /usr/src/fio-static/fio ]]
00:04:24.053 + sudo dmesg -Tw
00:04:24.053 + export FIO_BIN=/usr/src/fio-static/fio
00:04:24.053 + FIO_BIN=/usr/src/fio-static/fio
00:04:24.053 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:24.053 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:24.053 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:24.053 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:24.053 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:24.053 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:24.053 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:24.053 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:24.053 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:24.053 18:02:34 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:04:24.053 18:02:34 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:24.053 18:02:34 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:24.053 18:02:34 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:04:24.053 18:02:34 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:04:24.053 18:02:34 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:04:24.053 18:02:34 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:04:24.053 18:02:34 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:04:24.053 18:02:34 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:04:24.053 18:02:34 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:04:24.053 18:02:34 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:24.053 18:02:34 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:04:24.053 18:02:34 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:04:24.053 18:02:34 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:24.311 18:02:34 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:04:24.311 18:02:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:24.311 18:02:34 -- scripts/common.sh@15 -- $ shopt -s extglob
00:04:24.311 18:02:34 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:04:24.311 18:02:34 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:24.311 18:02:34 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:24.311 18:02:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:24.311 18:02:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:24.312 18:02:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:24.312 18:02:34 -- paths/export.sh@5 -- $ export PATH
00:04:24.312 18:02:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:24.312 18:02:34 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:04:24.312 18:02:34 -- common/autobuild_common.sh@493 -- $ date +%s
00:04:24.312 18:02:34 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733508154.XXXXXX
00:04:24.312 18:02:34 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733508154.2agX8j
00:04:24.312 18:02:34 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:04:24.312 18:02:34 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:04:24.312 18:02:34 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:04:24.312 18:02:34 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:04:24.312 18:02:34 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:04:24.312 18:02:34 -- common/autobuild_common.sh@509 -- $ get_config_params
00:04:24.312 18:02:34 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:04:24.312 18:02:34 -- common/autotest_common.sh@10 -- $ set +x
00:04:24.312 18:02:34 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:04:24.312 18:02:34 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:04:24.312 18:02:34 -- pm/common@17 -- $ local monitor
00:04:24.312 18:02:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:24.312 18:02:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:24.312 18:02:34 -- pm/common@21 -- $ date +%s
00:04:24.312 18:02:34 -- pm/common@25 -- $ sleep 1
00:04:24.312 18:02:34 -- pm/common@21 -- $ date +%s
00:04:24.312 18:02:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733508154
00:04:24.312 18:02:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733508154
00:04:24.312 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733508154_collect-cpu-load.pm.log
00:04:24.312 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733508154_collect-vmstat.pm.log
00:04:25.244 18:02:35 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:04:25.244 18:02:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:04:25.244 18:02:35 -- spdk/autobuild.sh@12 -- $ umask 022
00:04:25.244 18:02:35 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:04:25.244 18:02:35 -- spdk/autobuild.sh@16 -- $ date -u
00:04:25.244 Fri Dec 6 06:02:35 PM UTC 2024
00:04:25.244 18:02:35 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:25.244 v25.01-pre-310-g1148849d6
00:04:25.244 18:02:35 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:04:25.244 18:02:35 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:04:25.244 18:02:35 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:25.244 18:02:35 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:25.244 18:02:35 -- common/autotest_common.sh@10 -- $ set +x
00:04:25.244 ************************************
00:04:25.244 START TEST asan
00:04:25.244 ************************************
00:04:25.244 using asan
00:04:25.244 18:02:35 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:04:25.244
00:04:25.244 real 0m0.000s
00:04:25.244 user 0m0.000s
00:04:25.244 sys 0m0.000s
00:04:25.244 18:02:35 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:25.244 18:02:35 asan -- common/autotest_common.sh@10 -- $ set +x
00:04:25.244 ************************************
00:04:25.244 END TEST asan
00:04:25.244 ************************************
00:04:25.502 18:02:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:04:25.502 18:02:35 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:04:25.502 18:02:35 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:25.502 18:02:35 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:25.502 18:02:35 -- common/autotest_common.sh@10 -- $ set +x
00:04:25.502 ************************************
00:04:25.502 START TEST ubsan
00:04:25.502 ************************************
00:04:25.502 using ubsan
00:04:25.502 18:02:35 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:04:25.502
00:04:25.502 real 0m0.000s
00:04:25.502 user 0m0.000s
00:04:25.502 sys 0m0.000s
00:04:25.502 18:02:35 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:25.502 18:02:35 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:25.502 ************************************
00:04:25.502 END TEST ubsan
00:04:25.502 ************************************
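
The START/END banners and the zeroed real/user/sys timings above come from SPDK's run_test helper, which wraps a command in banners and times it. A loose, hypothetical reconstruction of the shape of that wrapper (the actual implementation lives in common/autotest_common.sh, per the frames above, and also manages xtrace state and exit-code bookkeeping):

    run_test() {
        # Hypothetical reduction of the banner-and-time pattern seen in the log.
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }
    run_test asan echo 'using asan'
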
00:04:25.502 18:02:35 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:04:25.502 18:02:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:25.502 18:02:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:25.502 18:02:35 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:25.502 18:02:35 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:25.502 18:02:35 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:25.502 18:02:35 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:25.502 18:02:35 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:25.502 18:02:35 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:04:25.502 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:25.502 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:26.067 Using 'verbs' RDMA provider
00:04:42.387 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:05:00.466 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:05:00.466 Creating mk/config.mk...done.
00:05:00.466 Creating mk/cc.flags.mk...done.
00:05:00.466 Type 'make' to build.
00:05:00.466 18:03:09 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:05:00.466 18:03:09 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:00.466 18:03:09 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:00.466 18:03:09 -- common/autotest_common.sh@10 -- $ set +x
00:05:00.466 ************************************
00:05:00.466 START TEST make
00:05:00.466 ************************************
00:05:00.466 18:03:09 make -- common/autotest_common.sh@1129 -- $ make -j10
00:05:00.466 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:05:00.466 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:05:00.466 meson setup builddir \
00:05:00.466 -Dwith-libaio=enabled \
00:05:00.466 -Dwith-liburing=enabled \
00:05:00.466 -Dwith-libvfn=disabled \
00:05:00.466 -Dwith-spdk=disabled \
00:05:00.466 -Dexamples=false \
00:05:00.466 -Dtests=false \
00:05:00.466 -Dtools=false && \
00:05:00.466 meson compile -C builddir && \
00:05:00.466 cd -)
00:05:00.466 make[1]: Nothing to be done for 'all'.
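
The same configure-and-build step can be replayed by hand from the flags recorded above (illustrative; paths assume the log's /home/vagrant/spdk_repo checkout, and -j10 matches the VM's 10 vCPUs):

    cd /home/vagrant/spdk_repo/spdk
    # Flags copied from the autobuild.sh@67 line in the log.
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk \
        --with-xnvme --with-shared
    make -j10
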
00:05:01.845 The Meson build system
00:05:01.845 Version: 1.5.0
00:05:01.845 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:05:01.845 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:05:01.845 Build type: native build
00:05:01.845 Project name: xnvme
00:05:01.845 Project version: 0.7.5
00:05:01.845 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:01.845 C linker for the host machine: cc ld.bfd 2.40-14
00:05:01.845 Host machine cpu family: x86_64
00:05:01.845 Host machine cpu: x86_64
00:05:01.845 Message: host_machine.system: linux
00:05:01.845 Compiler for C supports arguments -Wno-missing-braces: YES
00:05:01.845 Compiler for C supports arguments -Wno-cast-function-type: YES
00:05:01.845 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:05:01.845 Run-time dependency threads found: YES
00:05:01.845 Has header "setupapi.h" : NO
00:05:01.845 Has header "linux/blkzoned.h" : YES
00:05:01.845 Has header "linux/blkzoned.h" : YES (cached)
00:05:01.845 Has header "libaio.h" : YES
00:05:01.845 Library aio found: YES
00:05:01.845 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:01.845 Run-time dependency liburing found: YES 2.2
00:05:01.845 Dependency libvfn skipped: feature with-libvfn disabled
00:05:01.845 Found CMake: /usr/bin/cmake (3.27.7)
00:05:01.845 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:05:01.845 Subproject spdk : skipped: feature with-spdk disabled
00:05:01.845 Run-time dependency appleframeworks found: NO (tried framework)
00:05:01.845 Run-time dependency appleframeworks found: NO (tried framework)
00:05:01.845 Library rt found: YES
00:05:01.845 Checking for function "clock_gettime" with dependency -lrt: YES
00:05:01.845 Configuring xnvme_config.h using configuration
00:05:01.845 Configuring xnvme.spec using configuration
00:05:01.845 Run-time dependency bash-completion found: YES 2.11
00:05:01.845 Message: Bash-completions: /usr/share/bash-completion/completions
00:05:01.845 Program cp found: YES (/usr/bin/cp)
00:05:01.845 Build targets in project: 3
00:05:01.845
00:05:01.845 xnvme 0.7.5
00:05:01.845
00:05:01.845 Subprojects
00:05:01.845 spdk : NO Feature 'with-spdk' disabled
00:05:01.845
00:05:01.845 User defined options
00:05:01.845 examples : false
00:05:01.845 tests : false
00:05:01.845 tools : false
00:05:01.845 with-libaio : enabled
00:05:01.845 with-liburing: enabled
00:05:01.845 with-libvfn : disabled
00:05:01.845 with-spdk : disabled
00:05:01.845
00:05:01.845 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:01.846 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:05:02.104 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:05:02.104 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:05:02.104 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:05:02.104 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:05:02.104 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:05:02.104 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:05:02.104 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:05:02.104 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:05:02.104 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:05:02.104 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:05:02.104 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:05:02.104 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:05:02.104 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:05:02.104 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:05:02.104 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:05:02.363 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:05:02.363 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:05:02.363 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:05:02.363 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:05:02.363 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:05:02.363 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:05:02.363 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:05:02.363 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:05:02.363 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:05:02.363 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:05:02.363 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:05:02.363 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:05:02.363 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:05:02.363 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:05:02.363 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:05:02.363 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:05:02.363 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:05:02.363 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:05:02.363 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:05:02.363 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:05:02.363 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:05:02.363 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:05:02.363 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:05:02.363 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:05:02.363 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:05:02.363 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:05:02.363 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:05:02.363 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:05:02.363 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:05:02.363 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:05:02.363 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:05:02.363 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:05:02.363 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:05:02.363 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:05:02.623 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:05:02.623 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:05:02.623 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:05:02.623 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:05:02.623 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:05:02.623 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:05:02.623 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:05:02.623 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:05:02.623 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:05:02.623 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:05:02.623 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:05:02.623 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:05:02.623 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:05:02.623 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:05:02.623 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:05:02.623 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:05:02.623 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:05:02.623 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:05:02.883 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:05:02.883 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:05:02.883 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:05:02.883 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:05:02.883 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:05:02.883 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:05:03.141 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:05:03.141 [75/76] Linking static target lib/libxnvme.a
00:05:03.141 [76/76] Linking target lib/libxnvme.so.0.7.5
00:05:03.141 INFO: autodetecting backend as ninja
00:05:03.141 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:05:03.141 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:05:11.375 The Meson build system
00:05:11.375 Version: 1.5.0
00:05:11.375 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:05:11.375 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:05:11.375 Build type: native build
00:05:11.375 Program cat found: YES (/usr/bin/cat)
00:05:11.375 Project name: DPDK
00:05:11.375 Project version: 24.03.0
00:05:11.376 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:11.376 C linker for the host machine: cc ld.bfd 2.40-14
00:05:11.376 Host machine cpu family: x86_64
00:05:11.376 Host machine cpu: x86_64
00:05:11.376 Message: ## Building in Developer Mode ##
00:05:11.376 Program pkg-config found: YES (/usr/bin/pkg-config)
00:05:11.376 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:05:11.376 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:05:11.376 Program python3 found: YES (/usr/bin/python3)
00:05:11.376 Program cat found: YES (/usr/bin/cat)
00:05:11.376 Compiler for C supports arguments -march=native: YES
00:05:11.376 Checking for size of "void *" : 8
00:05:11.376 Checking for size of "void *" : 8 (cached)
00:05:11.376 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:05:11.376 Library m found: YES
00:05:11.376 Library numa found: YES
00:05:11.376 Has header "numaif.h" : YES
00:05:11.376 Library fdt found: NO
00:05:11.376 Library execinfo found: NO
00:05:11.376 Has header "execinfo.h" : YES
00:05:11.376 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:11.376 Run-time dependency libarchive found: NO (tried pkgconfig)
00:05:11.376 Run-time dependency libbsd found: NO (tried pkgconfig)
00:05:11.376 Run-time dependency jansson found: NO (tried pkgconfig)
00:05:11.376 Run-time dependency openssl found: YES 3.1.1
00:05:11.376 Run-time dependency libpcap found: YES 1.10.4
00:05:11.376 Has header "pcap.h" with dependency libpcap: YES
00:05:11.376 Compiler for C supports arguments -Wcast-qual: YES
00:05:11.376 Compiler for C supports arguments -Wdeprecated: YES
00:05:11.376 Compiler for C supports arguments -Wformat: YES
00:05:11.376 Compiler for C supports arguments -Wformat-nonliteral: NO
00:05:11.376 Compiler for C supports arguments -Wformat-security: NO
00:05:11.376 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:11.376 Compiler for C supports arguments -Wmissing-prototypes: YES
00:05:11.376 Compiler for C supports arguments -Wnested-externs: YES
00:05:11.376 Compiler for C supports arguments -Wold-style-definition: YES
00:05:11.376 Compiler for C supports arguments -Wpointer-arith: YES
00:05:11.376 Compiler for C supports arguments -Wsign-compare: YES
00:05:11.376 Compiler for C supports arguments -Wstrict-prototypes: YES
00:05:11.376 Compiler for C supports arguments -Wundef: YES
00:05:11.376 Compiler for C supports arguments -Wwrite-strings: YES
00:05:11.376 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:05:11.376 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:05:11.376 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:11.376 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:05:11.376 Program objdump found: YES (/usr/bin/objdump)
00:05:11.376 Compiler for C supports arguments -mavx512f: YES
00:05:11.376 Checking if "AVX512 checking" compiles: YES
00:05:11.376 Fetching value of define "__SSE4_2__" : 1
00:05:11.376 Fetching value of define "__AES__" : 1
00:05:11.376 Fetching value of define "__AVX__" : 1
00:05:11.376 Fetching value of define "__AVX2__" : 1
00:05:11.376 Fetching value of define "__AVX512BW__" : 1
00:05:11.376 Fetching value of define "__AVX512CD__" : 1
00:05:11.376 Fetching value of define "__AVX512DQ__" : 1
00:05:11.376 Fetching value of define "__AVX512F__" : 1
00:05:11.376 Fetching value of define "__AVX512VL__" : 1
00:05:11.376 Fetching value of define "__PCLMUL__" : 1
00:05:11.376 Fetching value of define "__RDRND__" : 1
00:05:11.376 Fetching value of define "__RDSEED__" : 1
00:05:11.376 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:05:11.376 Fetching value of define "__znver1__" : (undefined)
00:05:11.376 Fetching value of define "__znver2__" : (undefined)
00:05:11.376 Fetching value of define "__znver3__" : (undefined)
00:05:11.376 Fetching value of define "__znver4__" : (undefined)
00:05:11.376 Library asan found: YES
00:05:11.376 Compiler for C supports arguments -Wno-format-truncation: YES
00:05:11.376 Message: lib/log: Defining dependency "log"
00:05:11.376 Message: lib/kvargs: Defining dependency "kvargs"
00:05:11.376 Message: lib/telemetry: Defining dependency "telemetry"
00:05:11.376 Library rt found: YES
00:05:11.376 Checking for function "getentropy" : NO
00:05:11.376 Message: lib/eal: Defining dependency "eal"
00:05:11.376 Message: lib/ring: Defining dependency "ring"
00:05:11.376 Message: lib/rcu: Defining dependency "rcu"
00:05:11.376 Message: lib/mempool: Defining dependency "mempool"
00:05:11.376 Message: lib/mbuf: Defining dependency "mbuf"
00:05:11.376 Fetching value of define "__PCLMUL__" : 1 (cached)
00:05:11.376 Fetching value of define "__AVX512F__" : 1 (cached)
00:05:11.376 Fetching value of define "__AVX512BW__" : 1 (cached)
00:05:11.376 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:05:11.376 Fetching value of define "__AVX512VL__" : 1 (cached)
00:05:11.376 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:05:11.376 Compiler for C supports arguments -mpclmul: YES
00:05:11.376 Compiler for C supports arguments -maes: YES
00:05:11.376 Compiler for C supports arguments -mavx512f: YES (cached)
00:05:11.376 Compiler for C supports arguments -mavx512bw: YES
00:05:11.376 Compiler for C supports arguments -mavx512dq: YES
00:05:11.376 Compiler for C supports arguments -mavx512vl: YES
00:05:11.376 Compiler for C supports arguments -mvpclmulqdq: YES
00:05:11.376 Compiler for C supports arguments -mavx2: YES
00:05:11.376 Compiler for C supports arguments -mavx: YES
00:05:11.376 Message: lib/net: Defining dependency "net"
00:05:11.376 Message: lib/meter: Defining dependency "meter"
00:05:11.376 Message: lib/ethdev: Defining dependency "ethdev"
00:05:11.376 Message: lib/pci: Defining dependency "pci"
00:05:11.376 Message: lib/cmdline: Defining dependency "cmdline"
00:05:11.376 Message: lib/hash: Defining dependency "hash"
00:05:11.376 Message: lib/timer: Defining dependency "timer"
00:05:11.376 Message: lib/compressdev: Defining dependency "compressdev"
00:05:11.376 Message: lib/cryptodev: Defining dependency "cryptodev"
00:05:11.376 Message: lib/dmadev: Defining dependency "dmadev"
00:05:11.376 Compiler for C supports arguments -Wno-cast-qual: YES
00:05:11.376 Message: lib/power: Defining dependency "power"
00:05:11.376 Message: lib/reorder: Defining dependency "reorder"
00:05:11.376 Message: lib/security: Defining dependency "security"
00:05:11.376 Has header "linux/userfaultfd.h" : YES
00:05:11.376 Has header "linux/vduse.h" : YES
00:05:11.376 Message: lib/vhost: Defining dependency "vhost"
00:05:11.376 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:05:11.376 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:05:11.376 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:05:11.376 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:05:11.376 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:05:11.376 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:05:11.376 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:05:11.376 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:05:11.376 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:05:11.376 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:05:11.376 Program doxygen found: YES (/usr/local/bin/doxygen)
00:05:11.376 Configuring doxy-api-html.conf using configuration
00:05:11.376 Configuring doxy-api-man.conf using configuration
00:05:11.376 Program mandb found: YES (/usr/bin/mandb)
00:05:11.376 Program sphinx-build found: NO
00:05:11.376 Configuring rte_build_config.h using configuration
00:05:11.376 Message:
00:05:11.376 =================
00:05:11.376 Applications Enabled
00:05:11.376 =================
00:05:11.376
00:05:11.376 apps:
00:05:11.376
00:05:11.376
00:05:11.376 Message:
00:05:11.376 =================
00:05:11.376 Libraries Enabled
00:05:11.376 =================
00:05:11.376
00:05:11.376 libs:
00:05:11.376 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:05:11.376 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:05:11.376 cryptodev, dmadev, power, reorder, security, vhost,
00:05:11.376
00:05:11.376 Message:
00:05:11.376 ===============
00:05:11.376 Drivers Enabled
00:05:11.376 ===============
00:05:11.376
00:05:11.376 common:
00:05:11.376
00:05:11.376 bus:
00:05:11.376 pci, vdev,
00:05:11.376 mempool:
00:05:11.376 ring,
00:05:11.376 dma:
00:05:11.376
00:05:11.376 net:
00:05:11.376
00:05:11.376 crypto:
00:05:11.376
00:05:11.376 compress:
00:05:11.376
00:05:11.376 vdpa:
00:05:11.376
00:05:11.376
00:05:11.376 Message:
00:05:11.376 =================
00:05:11.376 Content Skipped
00:05:11.376 =================
00:05:11.376
00:05:11.376 apps:
00:05:11.376 dumpcap: explicitly disabled via build config
00:05:11.376 graph: explicitly disabled via build config
00:05:11.376 pdump: explicitly disabled via build config
00:05:11.376 proc-info: explicitly disabled via build config
00:05:11.376 test-acl: explicitly disabled via build config
00:05:11.376 test-bbdev: explicitly disabled via build config
00:05:11.376 test-cmdline: explicitly disabled via build config
00:05:11.376 test-compress-perf: explicitly disabled via build config
00:05:11.376 test-crypto-perf: explicitly disabled via build config
00:05:11.376 test-dma-perf: explicitly disabled via build config
00:05:11.376 test-eventdev: explicitly disabled via build config
00:05:11.376 test-fib: explicitly disabled via build config
00:05:11.376 test-flow-perf: explicitly disabled via build config
00:05:11.376 test-gpudev: explicitly disabled via build config
00:05:11.376 test-mldev: explicitly disabled via build config
00:05:11.376 test-pipeline: explicitly disabled via build config
00:05:11.376 test-pmd: explicitly disabled via build config
00:05:11.376 test-regex: explicitly disabled via build config
00:05:11.376 test-sad: explicitly disabled via build config
00:05:11.376 test-security-perf: explicitly disabled via build config
00:05:11.376
00:05:11.376 libs:
00:05:11.376 argparse: explicitly disabled via build config
00:05:11.376 metrics: explicitly disabled via build config
00:05:11.376 acl: explicitly disabled via build config
00:05:11.376 bbdev: explicitly disabled via build config
00:05:11.376 bitratestats: explicitly disabled via build config
00:05:11.376 bpf: explicitly disabled via build config
00:05:11.376 cfgfile: explicitly disabled via build config
00:05:11.376 distributor: explicitly disabled via build config
00:05:11.376 efd: explicitly disabled via build config
00:05:11.376 eventdev: explicitly disabled via build config
00:05:11.376 dispatcher: explicitly disabled via build config
00:05:11.376 gpudev: explicitly disabled via build config
00:05:11.376 gro: explicitly disabled via build config
00:05:11.376 gso: explicitly disabled via build config
00:05:11.376 ip_frag: explicitly disabled via build config
00:05:11.376 jobstats: explicitly disabled via build config
00:05:11.376 latencystats: explicitly disabled via build config
00:05:11.376 lpm: explicitly disabled via build config
00:05:11.376 member: explicitly disabled via build config
00:05:11.376 pcapng: explicitly disabled via build config
00:05:11.376 rawdev: explicitly disabled via build config
00:05:11.376 regexdev: explicitly disabled via build config
00:05:11.376 mldev: explicitly disabled via build config
00:05:11.376 rib: explicitly disabled via build config
00:05:11.376 sched: explicitly disabled via build config
00:05:11.376 stack: explicitly disabled via build config
00:05:11.376 ipsec: explicitly disabled via build config
00:05:11.376 pdcp: explicitly disabled via build config
00:05:11.376 fib: explicitly disabled via build config
00:05:11.376 port: explicitly disabled via build config
00:05:11.376 pdump: explicitly disabled via build config
00:05:11.376 table: explicitly disabled via build config
00:05:11.376 pipeline: explicitly disabled via build config
00:05:11.376 graph: explicitly disabled via build config
00:05:11.376 node: explicitly disabled via build config
00:05:11.376
00:05:11.376 drivers:
00:05:11.376 common/cpt: not in enabled drivers build config
00:05:11.376 common/dpaax: not in enabled drivers build config
00:05:11.376 common/iavf: not in enabled drivers build config
00:05:11.376 common/idpf: not in enabled drivers build config
00:05:11.376 common/ionic: not in enabled drivers build config
00:05:11.376 common/mvep: not in enabled drivers build config
00:05:11.376 common/octeontx: not in enabled drivers build config
00:05:11.376 bus/auxiliary: not in enabled drivers build config
00:05:11.376 bus/cdx: not in enabled drivers build config
00:05:11.376 bus/dpaa: not in enabled drivers build config
00:05:11.376 bus/fslmc: not in enabled drivers build config
00:05:11.376 bus/ifpga: not in enabled drivers build config
00:05:11.376 bus/platform: not in enabled drivers build config
00:05:11.376 bus/uacce: not in enabled drivers build config
00:05:11.376 bus/vmbus: not in enabled drivers build config
00:05:11.376 common/cnxk: not in enabled drivers build config
00:05:11.376 common/mlx5: not in enabled drivers build config
00:05:11.376 common/nfp: not in enabled drivers build config
00:05:11.376 common/nitrox: not in enabled drivers build config
00:05:11.376 common/qat: not in enabled drivers build config
00:05:11.376 common/sfc_efx: not in enabled drivers build config
00:05:11.376 mempool/bucket: not in enabled drivers build config
00:05:11.376 mempool/cnxk: not in enabled drivers build config
00:05:11.376 mempool/dpaa: not in enabled drivers build config
00:05:11.376 mempool/dpaa2: not in enabled drivers build config
00:05:11.376 mempool/octeontx: not in enabled drivers build config
00:05:11.376 mempool/stack: not in enabled drivers build config
00:05:11.376 dma/cnxk: not in enabled drivers build config
00:05:11.376 dma/dpaa: not in enabled drivers build config
00:05:11.377 dma/dpaa2: not in enabled drivers build config
00:05:11.377 dma/hisilicon: not in enabled drivers build config
00:05:11.377 dma/idxd: not in enabled drivers build config
00:05:11.377 dma/ioat: not in enabled drivers build config
00:05:11.377 dma/skeleton: not in enabled drivers build config
00:05:11.377 net/af_packet: not in enabled drivers build config
00:05:11.377 net/af_xdp: not in enabled drivers build config
00:05:11.377 net/ark: not in enabled drivers build config
00:05:11.377 net/atlantic: not in enabled drivers build config
00:05:11.377 net/avp: not in enabled drivers build config
00:05:11.377 net/axgbe: not in enabled drivers build config
00:05:11.377 net/bnx2x: not in enabled drivers build config
00:05:11.377 net/bnxt: not in enabled drivers build config
00:05:11.377 net/bonding: not in enabled drivers build config
00:05:11.377 net/cnxk: not in enabled drivers build config
00:05:11.377 net/cpfl: not in enabled drivers build config
00:05:11.377 net/cxgbe: not in enabled drivers build config
00:05:11.377 net/dpaa: not in enabled drivers build config
00:05:11.377 net/dpaa2: not in enabled drivers build config
00:05:11.377 net/e1000: not in enabled drivers build config
00:05:11.377 net/ena: not in enabled drivers build config
00:05:11.377 net/enetc: not in enabled drivers build config
00:05:11.377 net/enetfec: not in enabled drivers build config
00:05:11.377 net/enic: not in enabled drivers build config
00:05:11.377 net/failsafe: not in enabled drivers build config
00:05:11.377 net/fm10k: not in enabled drivers build config
00:05:11.377 net/gve: not in enabled drivers build config
00:05:11.377 net/hinic: not in enabled drivers build config
00:05:11.377 net/hns3: not in enabled drivers build config
00:05:11.377 net/i40e: not in enabled drivers build config
00:05:11.377 net/iavf: not in enabled drivers build config
00:05:11.377 net/ice: not in enabled drivers build config
00:05:11.377 net/idpf: not in enabled drivers build config
00:05:11.377 net/igc: not in enabled drivers build config
00:05:11.377 net/ionic: not in enabled drivers build config
00:05:11.377 net/ipn3ke: not in enabled drivers build config
00:05:11.377 net/ixgbe: not in enabled drivers build config
00:05:11.377 net/mana: not in enabled drivers build config
00:05:11.377 net/memif: not in enabled drivers build config
00:05:11.377 net/mlx4: not in enabled drivers build config
00:05:11.377 net/mlx5: not in enabled drivers build config
00:05:11.377 net/mvneta: not in enabled drivers build config
00:05:11.377 net/mvpp2: not in enabled drivers build config
00:05:11.377 net/netvsc: not in enabled drivers build config
00:05:11.377 net/nfb: not in enabled drivers build config
00:05:11.377 net/nfp: not in enabled drivers build config
00:05:11.377 net/ngbe: not in enabled drivers build config
00:05:11.377 net/null: not in enabled drivers build config
00:05:11.377 net/octeontx: not in enabled drivers build config
00:05:11.377 net/octeon_ep: not in enabled drivers build config
00:05:11.377 net/pcap: not in enabled drivers build config
00:05:11.377 net/pfe: not in enabled drivers build config
00:05:11.377 net/qede: not in enabled drivers build config
00:05:11.377 net/ring: not in enabled drivers build config
00:05:11.377 net/sfc: not in enabled drivers build config
00:05:11.377 net/softnic: not in enabled drivers build config
00:05:11.377 net/tap: not in enabled drivers build config
00:05:11.377 net/thunderx: not in enabled drivers build config
00:05:11.377 net/txgbe: not in enabled drivers build config
00:05:11.377 net/vdev_netvsc: not in enabled drivers build config
00:05:11.377 net/vhost: not in enabled drivers build config
00:05:11.377 net/virtio: not in enabled drivers build config
00:05:11.377 net/vmxnet3: not in enabled drivers build config
00:05:11.377 raw/*: missing internal dependency, "rawdev"
00:05:11.377 crypto/armv8: not in enabled drivers build config
00:05:11.377 crypto/bcmfs: not in enabled drivers build config
00:05:11.377 crypto/caam_jr: not in enabled drivers build config
00:05:11.377 crypto/ccp: not in enabled drivers build config
00:05:11.377 crypto/cnxk: not in enabled drivers build config
00:05:11.377 crypto/dpaa_sec: not in enabled drivers build config
00:05:11.377 crypto/dpaa2_sec: not in enabled drivers build config
00:05:11.377 crypto/ipsec_mb: not in enabled drivers build config
00:05:11.377 crypto/mlx5: not in enabled drivers build config
00:05:11.377 crypto/mvsam: not in enabled drivers build config
00:05:11.377 crypto/nitrox: not in enabled drivers build config 00:05:11.377 crypto/null: not in enabled drivers build config 00:05:11.377 crypto/octeontx: not in enabled drivers build config 00:05:11.377 crypto/openssl: not in enabled drivers build config 00:05:11.377 crypto/scheduler: not in enabled drivers build config 00:05:11.377 crypto/uadk: not in enabled drivers build config 00:05:11.377 crypto/virtio: not in enabled drivers build config 00:05:11.377 compress/isal: not in enabled drivers build config 00:05:11.377 compress/mlx5: not in enabled drivers build config 00:05:11.377 compress/nitrox: not in enabled drivers build config 00:05:11.377 compress/octeontx: not in enabled drivers build config 00:05:11.377 compress/zlib: not in enabled drivers build config 00:05:11.377 regex/*: missing internal dependency, "regexdev" 00:05:11.377 ml/*: missing internal dependency, "mldev" 00:05:11.377 vdpa/ifc: not in enabled drivers build config 00:05:11.377 vdpa/mlx5: not in enabled drivers build config 00:05:11.377 vdpa/nfp: not in enabled drivers build config 00:05:11.377 vdpa/sfc: not in enabled drivers build config 00:05:11.377 event/*: missing internal dependency, "eventdev" 00:05:11.377 baseband/*: missing internal dependency, "bbdev" 00:05:11.377 gpu/*: missing internal dependency, "gpudev" 00:05:11.377 00:05:11.377 00:05:11.377 Build targets in project: 85 00:05:11.377 00:05:11.377 DPDK 24.03.0 00:05:11.377 00:05:11.377 User defined options 00:05:11.377 buildtype : debug 00:05:11.377 default_library : shared 00:05:11.377 libdir : lib 00:05:11.377 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:11.377 b_sanitize : address 00:05:11.377 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:11.377 c_link_args : 00:05:11.377 cpu_instruction_set: native 00:05:11.377 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:05:11.377 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:05:11.377 enable_docs : false 00:05:11.377 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:05:11.377 enable_kmods : false 00:05:11.377 max_lcores : 128 00:05:11.377 tests : false 00:05:11.377 00:05:11.377 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:11.377 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:05:11.377 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:11.377 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:11.377 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:11.377 [4/268] Linking static target lib/librte_log.a 00:05:11.377 [5/268] Linking static target lib/librte_kvargs.a 00:05:11.377 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:11.636 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:11.636 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:11.894 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 
00:05:11.894 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:11.894 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:11.894 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:11.894 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:11.894 [14/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.895 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:11.895 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:11.895 [17/268] Linking static target lib/librte_telemetry.a 00:05:12.153 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:12.425 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.425 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:12.425 [21/268] Linking target lib/librte_log.so.24.1 00:05:12.425 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:12.425 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:12.425 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:12.425 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:12.684 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:12.684 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:12.684 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:12.684 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:12.684 [30/268] Linking target lib/librte_kvargs.so.24.1 00:05:12.684 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:12.684 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:12.943 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.943 [34/268] Linking target lib/librte_telemetry.so.24.1 00:05:12.943 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:12.943 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:13.203 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:13.203 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:13.203 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:13.203 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:13.203 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:13.203 [42/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:13.203 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:13.203 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:13.203 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:13.461 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:13.461 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:13.720 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:13.720 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:13.720 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:13.720 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:13.720 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:13.979 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:13.979 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:13.979 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:13.979 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:14.238 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:14.238 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:14.238 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:14.238 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:14.238 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:14.238 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:14.238 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:14.498 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:14.498 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:14.498 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:14.756 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:14.756 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:14.756 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:15.014 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:15.014 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:15.014 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:15.014 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:15.014 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:15.014 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:15.014 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:15.014 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:15.272 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:15.272 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:15.272 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:15.530 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:15.530 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:15.530 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:15.530 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:15.789 [85/268] Linking static target lib/librte_eal.a 00:05:15.789 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:15.789 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:15.789 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:15.789 [89/268] Linking static 
target lib/librte_ring.a 00:05:15.789 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:15.789 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:15.789 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:16.048 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:16.048 [94/268] Linking static target lib/librte_mempool.a 00:05:16.048 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:16.048 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:16.048 [97/268] Linking static target lib/librte_rcu.a 00:05:16.307 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:16.307 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:16.307 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.307 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:16.307 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:16.580 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:16.580 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:16.580 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:16.580 [106/268] Linking static target lib/librte_mbuf.a 00:05:16.580 [107/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.580 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:16.838 [109/268] Linking static target lib/librte_net.a 00:05:16.838 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:16.838 [111/268] Linking static target lib/librte_meter.a 00:05:16.838 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:16.838 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:17.097 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:17.097 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:17.097 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.097 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.367 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.367 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:17.625 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:17.625 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.883 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:17.883 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:17.883 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:17.883 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:18.143 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:18.143 [127/268] Linking static target lib/librte_pci.a 00:05:18.143 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:18.143 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:18.143 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 
00:05:18.143 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:18.401 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:18.401 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:18.401 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:18.401 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:18.401 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:18.401 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:18.401 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:18.401 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.662 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:18.662 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:18.662 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:18.662 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:18.662 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:18.662 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:18.662 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:18.662 [147/268] Linking static target lib/librte_cmdline.a 00:05:18.921 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:18.921 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:18.921 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:19.179 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:19.179 [152/268] Linking static target lib/librte_timer.a 00:05:19.179 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:19.436 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:19.437 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:19.437 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:19.696 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:19.696 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:19.696 [159/268] Linking static target lib/librte_hash.a 00:05:19.696 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:19.696 [161/268] Linking static target lib/librte_compressdev.a 00:05:19.696 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:19.956 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.956 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:19.956 [165/268] Linking static target lib/librte_ethdev.a 00:05:19.956 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:20.216 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:20.216 [168/268] Linking static target lib/librte_dmadev.a 00:05:20.216 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:20.216 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 
00:05:20.216 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:20.216 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.474 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:20.474 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:20.733 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.733 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:20.733 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:20.733 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:20.992 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:20.992 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.992 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:20.992 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.992 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:21.251 [184/268] Linking static target lib/librte_cryptodev.a 00:05:21.251 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:21.251 [186/268] Linking static target lib/librte_power.a 00:05:21.560 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:21.560 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:21.560 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:21.560 [190/268] Linking static target lib/librte_reorder.a 00:05:21.560 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:21.560 [192/268] Linking static target lib/librte_security.a 00:05:21.560 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:22.136 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.136 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:22.394 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.394 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:22.395 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.652 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:22.652 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:22.652 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:22.910 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:22.910 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:23.169 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:23.169 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:23.169 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:23.428 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:23.428 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:23.428 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 
00:05:23.428 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:23.685 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.685 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:23.685 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:23.685 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:23.685 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:23.685 [216/268] Linking static target drivers/librte_bus_vdev.a 00:05:23.685 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:23.685 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:23.685 [219/268] Linking static target drivers/librte_bus_pci.a 00:05:23.685 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:23.685 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:23.943 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:23.943 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:23.943 [224/268] Linking static target drivers/librte_mempool_ring.a 00:05:23.943 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:23.943 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:24.200 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:25.131 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:28.411 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:28.411 [230/268] Linking target lib/librte_eal.so.24.1 00:05:28.411 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:28.670 [232/268] Linking target lib/librte_pci.so.24.1 00:05:28.670 [233/268] Linking target lib/librte_meter.so.24.1 00:05:28.670 [234/268] Linking target lib/librte_ring.so.24.1 00:05:28.670 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:28.670 [236/268] Linking target lib/librte_dmadev.so.24.1 00:05:28.670 [237/268] Linking target lib/librte_timer.so.24.1 00:05:28.670 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:28.670 [239/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:28.670 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:28.670 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:28.670 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:28.670 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:28.929 [244/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:28.929 [245/268] Linking target lib/librte_rcu.so.24.1 00:05:28.929 [246/268] Linking target lib/librte_mempool.so.24.1 00:05:28.929 [247/268] Linking static target lib/librte_vhost.a 00:05:28.929 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:28.929 [249/268] Generating symbol file 
lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:28.929 [250/268] Linking target lib/librte_mbuf.so.24.1 00:05:28.929 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:29.188 [252/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:29.188 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:29.188 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:05:29.188 [255/268] Linking target lib/librte_compressdev.so.24.1 00:05:29.188 [256/268] Linking target lib/librte_net.so.24.1 00:05:29.188 [257/268] Linking target lib/librte_reorder.so.24.1 00:05:29.448 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:29.448 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:29.448 [260/268] Linking target lib/librte_security.so.24.1 00:05:29.448 [261/268] Linking target lib/librte_cmdline.so.24.1 00:05:29.448 [262/268] Linking target lib/librte_hash.so.24.1 00:05:29.448 [263/268] Linking target lib/librte_ethdev.so.24.1 00:05:29.707 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:29.707 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:29.707 [266/268] Linking target lib/librte_power.so.24.1 00:05:31.082 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:31.340 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:31.340 INFO: autodetecting backend as ninja 00:05:31.340 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:49.511 CC lib/log/log.o 00:05:49.511 CC lib/log/log_flags.o 00:05:49.511 CC lib/log/log_deprecated.o 00:05:49.511 CC lib/ut/ut.o 00:05:49.511 CC lib/ut_mock/mock.o 00:05:49.511 LIB libspdk_log.a 00:05:49.511 LIB libspdk_ut_mock.a 00:05:49.511 SO libspdk_log.so.7.1 00:05:49.511 LIB libspdk_ut.a 00:05:49.511 SO libspdk_ut_mock.so.6.0 00:05:49.511 SO libspdk_ut.so.2.0 00:05:49.511 SYMLINK libspdk_log.so 00:05:49.511 SYMLINK libspdk_ut_mock.so 00:05:49.769 SYMLINK libspdk_ut.so 00:05:49.769 CC lib/dma/dma.o 00:05:49.769 CC lib/util/base64.o 00:05:49.769 CC lib/util/bit_array.o 00:05:49.769 CC lib/ioat/ioat.o 00:05:49.769 CC lib/util/crc16.o 00:05:49.769 CC lib/util/crc32.o 00:05:49.769 CC lib/util/cpuset.o 00:05:49.769 CC lib/util/crc32c.o 00:05:49.769 CXX lib/trace_parser/trace.o 00:05:50.027 CC lib/vfio_user/host/vfio_user_pci.o 00:05:50.027 CC lib/vfio_user/host/vfio_user.o 00:05:50.027 CC lib/util/crc32_ieee.o 00:05:50.027 CC lib/util/crc64.o 00:05:50.027 CC lib/util/dif.o 00:05:50.027 CC lib/util/fd.o 00:05:50.027 CC lib/util/fd_group.o 00:05:50.027 LIB libspdk_dma.a 00:05:50.027 SO libspdk_dma.so.5.0 00:05:50.027 CC lib/util/file.o 00:05:50.285 CC lib/util/hexlify.o 00:05:50.285 LIB libspdk_ioat.a 00:05:50.285 SYMLINK libspdk_dma.so 00:05:50.285 CC lib/util/iov.o 00:05:50.285 SO libspdk_ioat.so.7.0 00:05:50.285 CC lib/util/math.o 00:05:50.285 CC lib/util/net.o 00:05:50.285 LIB libspdk_vfio_user.a 00:05:50.285 SYMLINK libspdk_ioat.so 00:05:50.285 CC lib/util/pipe.o 00:05:50.285 SO libspdk_vfio_user.so.5.0 00:05:50.285 CC lib/util/strerror_tls.o 00:05:50.285 CC lib/util/string.o 00:05:50.285 SYMLINK libspdk_vfio_user.so 00:05:50.285 CC lib/util/uuid.o 00:05:50.285 CC lib/util/xor.o 00:05:50.285 CC lib/util/zipf.o 00:05:50.285 CC lib/util/md5.o 00:05:50.850 LIB 
libspdk_util.a 00:05:50.850 LIB libspdk_trace_parser.a 00:05:50.850 SO libspdk_trace_parser.so.6.0 00:05:51.108 SO libspdk_util.so.10.1 00:05:51.108 SYMLINK libspdk_trace_parser.so 00:05:51.108 SYMLINK libspdk_util.so 00:05:51.366 CC lib/conf/conf.o 00:05:51.366 CC lib/json/json_util.o 00:05:51.366 CC lib/json/json_write.o 00:05:51.366 CC lib/json/json_parse.o 00:05:51.366 CC lib/idxd/idxd.o 00:05:51.366 CC lib/idxd/idxd_user.o 00:05:51.366 CC lib/idxd/idxd_kernel.o 00:05:51.366 CC lib/vmd/vmd.o 00:05:51.366 CC lib/env_dpdk/env.o 00:05:51.366 CC lib/rdma_utils/rdma_utils.o 00:05:51.624 CC lib/vmd/led.o 00:05:51.624 LIB libspdk_conf.a 00:05:51.624 CC lib/env_dpdk/memory.o 00:05:51.624 CC lib/env_dpdk/pci.o 00:05:51.624 SO libspdk_conf.so.6.0 00:05:51.624 CC lib/env_dpdk/init.o 00:05:51.624 LIB libspdk_rdma_utils.a 00:05:51.624 LIB libspdk_json.a 00:05:51.624 SYMLINK libspdk_conf.so 00:05:51.624 CC lib/env_dpdk/threads.o 00:05:51.624 SO libspdk_rdma_utils.so.1.0 00:05:51.624 SO libspdk_json.so.6.0 00:05:51.882 CC lib/env_dpdk/pci_ioat.o 00:05:51.882 SYMLINK libspdk_rdma_utils.so 00:05:51.882 CC lib/env_dpdk/pci_virtio.o 00:05:51.882 SYMLINK libspdk_json.so 00:05:51.882 CC lib/env_dpdk/pci_vmd.o 00:05:51.882 CC lib/env_dpdk/pci_idxd.o 00:05:51.882 CC lib/env_dpdk/pci_event.o 00:05:51.882 CC lib/env_dpdk/sigbus_handler.o 00:05:51.882 CC lib/env_dpdk/pci_dpdk.o 00:05:51.882 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:52.140 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:52.140 LIB libspdk_idxd.a 00:05:52.140 SO libspdk_idxd.so.12.1 00:05:52.140 LIB libspdk_vmd.a 00:05:52.140 SYMLINK libspdk_idxd.so 00:05:52.140 SO libspdk_vmd.so.6.0 00:05:52.397 CC lib/rdma_provider/common.o 00:05:52.397 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:52.397 SYMLINK libspdk_vmd.so 00:05:52.397 CC lib/jsonrpc/jsonrpc_server.o 00:05:52.397 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:52.397 CC lib/jsonrpc/jsonrpc_client.o 00:05:52.397 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:52.397 LIB libspdk_rdma_provider.a 00:05:52.655 SO libspdk_rdma_provider.so.7.0 00:05:52.655 LIB libspdk_jsonrpc.a 00:05:52.655 SYMLINK libspdk_rdma_provider.so 00:05:52.655 SO libspdk_jsonrpc.so.6.0 00:05:52.655 SYMLINK libspdk_jsonrpc.so 00:05:53.223 LIB libspdk_env_dpdk.a 00:05:53.223 CC lib/rpc/rpc.o 00:05:53.223 SO libspdk_env_dpdk.so.15.1 00:05:53.485 LIB libspdk_rpc.a 00:05:53.485 SYMLINK libspdk_env_dpdk.so 00:05:53.485 SO libspdk_rpc.so.6.0 00:05:53.485 SYMLINK libspdk_rpc.so 00:05:54.053 CC lib/keyring/keyring.o 00:05:54.053 CC lib/keyring/keyring_rpc.o 00:05:54.053 CC lib/trace/trace.o 00:05:54.053 CC lib/trace/trace_rpc.o 00:05:54.053 CC lib/trace/trace_flags.o 00:05:54.053 CC lib/notify/notify.o 00:05:54.053 CC lib/notify/notify_rpc.o 00:05:54.053 LIB libspdk_notify.a 00:05:54.312 SO libspdk_notify.so.6.0 00:05:54.312 LIB libspdk_keyring.a 00:05:54.312 LIB libspdk_trace.a 00:05:54.312 SO libspdk_keyring.so.2.0 00:05:54.312 SYMLINK libspdk_notify.so 00:05:54.312 SO libspdk_trace.so.11.0 00:05:54.312 SYMLINK libspdk_keyring.so 00:05:54.312 SYMLINK libspdk_trace.so 00:05:54.877 CC lib/thread/thread.o 00:05:54.877 CC lib/thread/iobuf.o 00:05:54.877 CC lib/sock/sock.o 00:05:54.877 CC lib/sock/sock_rpc.o 00:05:55.445 LIB libspdk_sock.a 00:05:55.445 SO libspdk_sock.so.10.0 00:05:55.445 SYMLINK libspdk_sock.so 00:05:56.012 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:56.012 CC lib/nvme/nvme_fabric.o 00:05:56.012 CC lib/nvme/nvme_ctrlr.o 00:05:56.012 CC lib/nvme/nvme_ns.o 00:05:56.012 CC lib/nvme/nvme_pcie_common.o 00:05:56.012 CC lib/nvme/nvme_ns_cmd.o 
00:05:56.012 CC lib/nvme/nvme_pcie.o 00:05:56.012 CC lib/nvme/nvme.o 00:05:56.012 CC lib/nvme/nvme_qpair.o 00:05:56.580 CC lib/nvme/nvme_quirks.o 00:05:56.839 LIB libspdk_thread.a 00:05:56.839 CC lib/nvme/nvme_transport.o 00:05:56.839 CC lib/nvme/nvme_discovery.o 00:05:56.839 SO libspdk_thread.so.11.0 00:05:56.839 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:56.839 SYMLINK libspdk_thread.so 00:05:56.839 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:56.839 CC lib/nvme/nvme_tcp.o 00:05:56.839 CC lib/nvme/nvme_opal.o 00:05:57.096 CC lib/nvme/nvme_io_msg.o 00:05:57.096 CC lib/nvme/nvme_poll_group.o 00:05:57.096 CC lib/nvme/nvme_zns.o 00:05:57.355 CC lib/nvme/nvme_stubs.o 00:05:57.355 CC lib/nvme/nvme_auth.o 00:05:57.614 CC lib/nvme/nvme_cuse.o 00:05:57.614 CC lib/nvme/nvme_rdma.o 00:05:57.614 CC lib/accel/accel.o 00:05:57.873 CC lib/accel/accel_rpc.o 00:05:57.873 CC lib/accel/accel_sw.o 00:05:57.873 CC lib/blob/blobstore.o 00:05:57.873 CC lib/blob/request.o 00:05:57.873 CC lib/blob/zeroes.o 00:05:58.132 CC lib/blob/blob_bs_dev.o 00:05:58.391 CC lib/init/json_config.o 00:05:58.391 CC lib/init/subsystem.o 00:05:58.391 CC lib/init/subsystem_rpc.o 00:05:58.391 CC lib/init/rpc.o 00:05:58.649 CC lib/virtio/virtio.o 00:05:58.649 CC lib/fsdev/fsdev.o 00:05:58.649 CC lib/virtio/virtio_vhost_user.o 00:05:58.649 CC lib/virtio/virtio_vfio_user.o 00:05:58.649 CC lib/virtio/virtio_pci.o 00:05:58.649 LIB libspdk_init.a 00:05:58.649 CC lib/fsdev/fsdev_io.o 00:05:58.649 SO libspdk_init.so.6.0 00:05:58.906 SYMLINK libspdk_init.so 00:05:58.906 CC lib/fsdev/fsdev_rpc.o 00:05:58.906 LIB libspdk_virtio.a 00:05:59.166 LIB libspdk_accel.a 00:05:59.166 SO libspdk_virtio.so.7.0 00:05:59.166 SO libspdk_accel.so.16.0 00:05:59.166 SYMLINK libspdk_virtio.so 00:05:59.166 SYMLINK libspdk_accel.so 00:05:59.166 CC lib/event/reactor.o 00:05:59.166 LIB libspdk_nvme.a 00:05:59.166 CC lib/event/app.o 00:05:59.166 CC lib/event/scheduler_static.o 00:05:59.166 CC lib/event/app_rpc.o 00:05:59.166 CC lib/event/log_rpc.o 00:05:59.424 CC lib/bdev/bdev_zone.o 00:05:59.424 CC lib/bdev/bdev.o 00:05:59.424 CC lib/bdev/bdev_rpc.o 00:05:59.424 SO libspdk_nvme.so.15.0 00:05:59.424 CC lib/bdev/part.o 00:05:59.794 LIB libspdk_fsdev.a 00:05:59.794 CC lib/bdev/scsi_nvme.o 00:05:59.794 SO libspdk_fsdev.so.2.0 00:05:59.794 SYMLINK libspdk_fsdev.so 00:05:59.794 LIB libspdk_event.a 00:05:59.794 SO libspdk_event.so.14.0 00:06:00.072 SYMLINK libspdk_nvme.so 00:06:00.072 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:00.072 SYMLINK libspdk_event.so 00:06:01.006 LIB libspdk_fuse_dispatcher.a 00:06:01.006 SO libspdk_fuse_dispatcher.so.1.0 00:06:01.006 SYMLINK libspdk_fuse_dispatcher.so 00:06:01.940 LIB libspdk_blob.a 00:06:02.198 SO libspdk_blob.so.12.0 00:06:02.198 SYMLINK libspdk_blob.so 00:06:02.455 CC lib/lvol/lvol.o 00:06:02.716 CC lib/blobfs/blobfs.o 00:06:02.716 CC lib/blobfs/tree.o 00:06:02.975 LIB libspdk_bdev.a 00:06:03.232 SO libspdk_bdev.so.17.0 00:06:03.232 SYMLINK libspdk_bdev.so 00:06:03.492 CC lib/nbd/nbd.o 00:06:03.492 CC lib/ublk/ublk.o 00:06:03.492 CC lib/ublk/ublk_rpc.o 00:06:03.492 CC lib/nbd/nbd_rpc.o 00:06:03.492 CC lib/scsi/dev.o 00:06:03.492 CC lib/scsi/lun.o 00:06:03.751 CC lib/nvmf/ctrlr.o 00:06:03.751 CC lib/ftl/ftl_core.o 00:06:03.751 LIB libspdk_blobfs.a 00:06:03.751 LIB libspdk_lvol.a 00:06:03.751 SO libspdk_blobfs.so.11.0 00:06:03.751 CC lib/ftl/ftl_init.o 00:06:03.751 SO libspdk_lvol.so.11.0 00:06:04.010 CC lib/ftl/ftl_layout.o 00:06:04.010 CC lib/scsi/port.o 00:06:04.010 SYMLINK libspdk_lvol.so 00:06:04.010 CC 
lib/nvmf/ctrlr_discovery.o 00:06:04.010 SYMLINK libspdk_blobfs.so 00:06:04.010 CC lib/scsi/scsi.o 00:06:04.010 CC lib/scsi/scsi_bdev.o 00:06:04.269 CC lib/scsi/scsi_pr.o 00:06:04.269 CC lib/scsi/scsi_rpc.o 00:06:04.269 CC lib/scsi/task.o 00:06:04.269 CC lib/ftl/ftl_debug.o 00:06:04.269 LIB libspdk_nbd.a 00:06:04.269 CC lib/nvmf/ctrlr_bdev.o 00:06:04.269 SO libspdk_nbd.so.7.0 00:06:04.269 CC lib/ftl/ftl_io.o 00:06:04.544 SYMLINK libspdk_nbd.so 00:06:04.544 CC lib/nvmf/subsystem.o 00:06:04.544 CC lib/nvmf/nvmf.o 00:06:04.544 CC lib/ftl/ftl_sb.o 00:06:04.803 CC lib/ftl/ftl_l2p.o 00:06:04.803 CC lib/ftl/ftl_l2p_flat.o 00:06:04.803 LIB libspdk_ublk.a 00:06:04.803 SO libspdk_ublk.so.3.0 00:06:04.803 LIB libspdk_scsi.a 00:06:05.062 SYMLINK libspdk_ublk.so 00:06:05.062 CC lib/nvmf/nvmf_rpc.o 00:06:05.062 CC lib/ftl/ftl_nv_cache.o 00:06:05.062 SO libspdk_scsi.so.9.0 00:06:05.062 CC lib/ftl/ftl_band.o 00:06:05.062 CC lib/ftl/ftl_band_ops.o 00:06:05.062 CC lib/nvmf/transport.o 00:06:05.062 SYMLINK libspdk_scsi.so 00:06:05.062 CC lib/ftl/ftl_writer.o 00:06:05.671 CC lib/ftl/ftl_rq.o 00:06:05.671 CC lib/ftl/ftl_reloc.o 00:06:05.671 CC lib/ftl/ftl_l2p_cache.o 00:06:05.930 CC lib/ftl/ftl_p2l.o 00:06:05.930 CC lib/ftl/ftl_p2l_log.o 00:06:05.930 CC lib/ftl/mngt/ftl_mngt.o 00:06:05.930 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:05.930 CC lib/nvmf/tcp.o 00:06:06.188 CC lib/nvmf/stubs.o 00:06:06.188 CC lib/nvmf/mdns_server.o 00:06:06.188 CC lib/nvmf/rdma.o 00:06:06.447 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:06.447 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:06.447 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:06.447 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:06.447 CC lib/nvmf/auth.o 00:06:06.706 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:06.706 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:06.706 CC lib/iscsi/conn.o 00:06:06.706 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:06.706 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:06.706 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:06.706 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:06.967 CC lib/vhost/vhost.o 00:06:06.967 CC lib/iscsi/init_grp.o 00:06:06.967 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:06.967 CC lib/vhost/vhost_rpc.o 00:06:07.226 CC lib/vhost/vhost_scsi.o 00:06:07.226 CC lib/iscsi/iscsi.o 00:06:07.226 CC lib/iscsi/param.o 00:06:07.486 CC lib/iscsi/portal_grp.o 00:06:07.486 CC lib/iscsi/tgt_node.o 00:06:07.745 CC lib/ftl/utils/ftl_conf.o 00:06:07.745 CC lib/iscsi/iscsi_subsystem.o 00:06:07.745 CC lib/vhost/vhost_blk.o 00:06:07.745 CC lib/iscsi/iscsi_rpc.o 00:06:08.005 CC lib/iscsi/task.o 00:06:08.005 CC lib/vhost/rte_vhost_user.o 00:06:08.005 CC lib/ftl/utils/ftl_md.o 00:06:08.264 CC lib/ftl/utils/ftl_mempool.o 00:06:08.264 CC lib/ftl/utils/ftl_bitmap.o 00:06:08.264 CC lib/ftl/utils/ftl_property.o 00:06:08.264 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:08.264 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:08.264 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:08.522 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:08.522 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:08.522 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:08.522 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:08.522 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:08.781 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:08.781 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:08.781 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:08.781 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:08.781 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:08.781 CC lib/ftl/base/ftl_base_dev.o 00:06:09.041 LIB libspdk_iscsi.a 00:06:09.041 CC lib/ftl/ftl_trace.o 00:06:09.041 CC lib/ftl/base/ftl_base_bdev.o 00:06:09.041 SO 
libspdk_iscsi.so.8.0 00:06:09.300 SYMLINK libspdk_iscsi.so 00:06:09.300 LIB libspdk_vhost.a 00:06:09.300 LIB libspdk_ftl.a 00:06:09.300 SO libspdk_vhost.so.8.0 00:06:09.602 LIB libspdk_nvmf.a 00:06:09.602 SYMLINK libspdk_vhost.so 00:06:09.602 SO libspdk_ftl.so.9.0 00:06:09.602 SO libspdk_nvmf.so.20.0 00:06:09.863 SYMLINK libspdk_nvmf.so 00:06:10.123 SYMLINK libspdk_ftl.so 00:06:10.383 CC module/env_dpdk/env_dpdk_rpc.o 00:06:10.383 CC module/accel/ioat/accel_ioat.o 00:06:10.383 CC module/accel/error/accel_error.o 00:06:10.383 CC module/keyring/linux/keyring.o 00:06:10.383 CC module/blob/bdev/blob_bdev.o 00:06:10.383 CC module/sock/posix/posix.o 00:06:10.383 CC module/accel/dsa/accel_dsa.o 00:06:10.383 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:10.383 CC module/keyring/file/keyring.o 00:06:10.642 CC module/fsdev/aio/fsdev_aio.o 00:06:10.642 LIB libspdk_env_dpdk_rpc.a 00:06:10.642 SO libspdk_env_dpdk_rpc.so.6.0 00:06:10.642 CC module/keyring/linux/keyring_rpc.o 00:06:10.642 SYMLINK libspdk_env_dpdk_rpc.so 00:06:10.642 CC module/keyring/file/keyring_rpc.o 00:06:10.642 CC module/accel/error/accel_error_rpc.o 00:06:10.642 LIB libspdk_scheduler_dynamic.a 00:06:10.642 CC module/accel/ioat/accel_ioat_rpc.o 00:06:10.642 SO libspdk_scheduler_dynamic.so.4.0 00:06:10.900 LIB libspdk_keyring_linux.a 00:06:10.900 SYMLINK libspdk_scheduler_dynamic.so 00:06:10.900 SO libspdk_keyring_linux.so.1.0 00:06:10.900 LIB libspdk_keyring_file.a 00:06:10.900 LIB libspdk_accel_error.a 00:06:10.900 SYMLINK libspdk_keyring_linux.so 00:06:10.900 SO libspdk_keyring_file.so.2.0 00:06:10.901 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:10.901 LIB libspdk_blob_bdev.a 00:06:10.901 LIB libspdk_accel_ioat.a 00:06:10.901 SO libspdk_accel_error.so.2.0 00:06:10.901 SO libspdk_blob_bdev.so.12.0 00:06:11.159 SYMLINK libspdk_keyring_file.so 00:06:11.159 CC module/accel/dsa/accel_dsa_rpc.o 00:06:11.159 CC module/fsdev/aio/linux_aio_mgr.o 00:06:11.159 SO libspdk_accel_ioat.so.6.0 00:06:11.159 CC module/accel/iaa/accel_iaa.o 00:06:11.159 SYMLINK libspdk_accel_error.so 00:06:11.159 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:11.159 SYMLINK libspdk_blob_bdev.so 00:06:11.159 CC module/accel/iaa/accel_iaa_rpc.o 00:06:11.159 SYMLINK libspdk_accel_ioat.so 00:06:11.159 LIB libspdk_accel_dsa.a 00:06:11.159 SO libspdk_accel_dsa.so.5.0 00:06:11.417 LIB libspdk_scheduler_dpdk_governor.a 00:06:11.417 SYMLINK libspdk_accel_dsa.so 00:06:11.417 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:11.417 LIB libspdk_accel_iaa.a 00:06:11.417 SO libspdk_accel_iaa.so.3.0 00:06:11.417 CC module/scheduler/gscheduler/gscheduler.o 00:06:11.417 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:11.417 CC module/bdev/delay/vbdev_delay.o 00:06:11.417 SYMLINK libspdk_accel_iaa.so 00:06:11.417 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:11.417 CC module/blobfs/bdev/blobfs_bdev.o 00:06:11.417 CC module/bdev/error/vbdev_error.o 00:06:11.417 LIB libspdk_fsdev_aio.a 00:06:11.417 CC module/bdev/gpt/gpt.o 00:06:11.417 SO libspdk_fsdev_aio.so.1.0 00:06:11.417 LIB libspdk_scheduler_gscheduler.a 00:06:11.675 CC module/bdev/lvol/vbdev_lvol.o 00:06:11.675 SO libspdk_scheduler_gscheduler.so.4.0 00:06:11.675 CC module/bdev/malloc/bdev_malloc.o 00:06:11.675 SYMLINK libspdk_fsdev_aio.so 00:06:11.675 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:11.675 CC module/bdev/gpt/vbdev_gpt.o 00:06:11.675 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:11.675 SYMLINK libspdk_scheduler_gscheduler.so 00:06:11.675 LIB libspdk_sock_posix.a 00:06:11.675 SO libspdk_sock_posix.so.6.0 
00:06:11.675 CC module/bdev/error/vbdev_error_rpc.o 00:06:11.933 SYMLINK libspdk_sock_posix.so 00:06:11.933 CC module/bdev/null/bdev_null.o 00:06:11.933 LIB libspdk_blobfs_bdev.a 00:06:11.933 SO libspdk_blobfs_bdev.so.6.0 00:06:11.933 LIB libspdk_bdev_delay.a 00:06:11.933 CC module/bdev/nvme/bdev_nvme.o 00:06:11.933 SO libspdk_bdev_delay.so.6.0 00:06:11.933 LIB libspdk_bdev_gpt.a 00:06:11.933 LIB libspdk_bdev_error.a 00:06:11.933 SYMLINK libspdk_blobfs_bdev.so 00:06:11.933 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:11.933 SO libspdk_bdev_gpt.so.6.0 00:06:11.933 CC module/bdev/raid/bdev_raid.o 00:06:11.933 SO libspdk_bdev_error.so.6.0 00:06:11.933 CC module/bdev/passthru/vbdev_passthru.o 00:06:11.933 SYMLINK libspdk_bdev_delay.so 00:06:11.933 CC module/bdev/null/bdev_null_rpc.o 00:06:11.933 LIB libspdk_bdev_malloc.a 00:06:12.214 SO libspdk_bdev_malloc.so.6.0 00:06:12.214 SYMLINK libspdk_bdev_gpt.so 00:06:12.214 SYMLINK libspdk_bdev_error.so 00:06:12.214 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:12.214 SYMLINK libspdk_bdev_malloc.so 00:06:12.214 LIB libspdk_bdev_null.a 00:06:12.214 SO libspdk_bdev_null.so.6.0 00:06:12.214 CC module/bdev/split/vbdev_split.o 00:06:12.473 SYMLINK libspdk_bdev_null.so 00:06:12.473 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:12.473 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:12.473 CC module/bdev/aio/bdev_aio.o 00:06:12.473 CC module/bdev/xnvme/bdev_xnvme.o 00:06:12.473 LIB libspdk_bdev_lvol.a 00:06:12.473 LIB libspdk_bdev_passthru.a 00:06:12.473 SO libspdk_bdev_passthru.so.6.0 00:06:12.473 SO libspdk_bdev_lvol.so.6.0 00:06:12.473 CC module/bdev/ftl/bdev_ftl.o 00:06:12.473 SYMLINK libspdk_bdev_passthru.so 00:06:12.473 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:12.473 SYMLINK libspdk_bdev_lvol.so 00:06:12.473 CC module/bdev/raid/bdev_raid_rpc.o 00:06:12.473 CC module/bdev/split/vbdev_split_rpc.o 00:06:12.473 CC module/bdev/raid/bdev_raid_sb.o 00:06:12.732 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:06:12.732 LIB libspdk_bdev_zone_block.a 00:06:12.732 CC module/bdev/aio/bdev_aio_rpc.o 00:06:12.732 SO libspdk_bdev_zone_block.so.6.0 00:06:12.732 LIB libspdk_bdev_split.a 00:06:12.732 SO libspdk_bdev_split.so.6.0 00:06:12.732 SYMLINK libspdk_bdev_zone_block.so 00:06:12.732 LIB libspdk_bdev_ftl.a 00:06:12.732 CC module/bdev/raid/raid0.o 00:06:12.992 SYMLINK libspdk_bdev_split.so 00:06:12.992 CC module/bdev/raid/raid1.o 00:06:12.992 SO libspdk_bdev_ftl.so.6.0 00:06:12.992 LIB libspdk_bdev_xnvme.a 00:06:12.992 CC module/bdev/raid/concat.o 00:06:12.992 SO libspdk_bdev_xnvme.so.3.0 00:06:12.992 LIB libspdk_bdev_aio.a 00:06:12.992 SYMLINK libspdk_bdev_ftl.so 00:06:12.992 SO libspdk_bdev_aio.so.6.0 00:06:12.992 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:12.992 SYMLINK libspdk_bdev_xnvme.so 00:06:12.992 CC module/bdev/nvme/nvme_rpc.o 00:06:12.992 SYMLINK libspdk_bdev_aio.so 00:06:12.992 CC module/bdev/iscsi/bdev_iscsi.o 00:06:12.992 CC module/bdev/nvme/bdev_mdns_client.o 00:06:13.252 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:13.252 CC module/bdev/nvme/vbdev_opal.o 00:06:13.252 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:13.252 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:13.252 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:13.252 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:13.252 LIB libspdk_bdev_raid.a 00:06:13.512 SO libspdk_bdev_raid.so.6.0 00:06:13.512 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:13.512 SYMLINK libspdk_bdev_raid.so 00:06:13.512 LIB libspdk_bdev_iscsi.a 00:06:13.512 SO libspdk_bdev_iscsi.so.6.0 00:06:13.512 SYMLINK 
libspdk_bdev_iscsi.so 00:06:13.772 LIB libspdk_bdev_virtio.a 00:06:13.772 SO libspdk_bdev_virtio.so.6.0 00:06:14.031 SYMLINK libspdk_bdev_virtio.so 00:06:15.037 LIB libspdk_bdev_nvme.a 00:06:15.295 SO libspdk_bdev_nvme.so.7.1 00:06:15.295 SYMLINK libspdk_bdev_nvme.so 00:06:16.251 CC module/event/subsystems/scheduler/scheduler.o 00:06:16.251 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:16.251 CC module/event/subsystems/fsdev/fsdev.o 00:06:16.251 CC module/event/subsystems/vmd/vmd.o 00:06:16.251 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:16.251 CC module/event/subsystems/sock/sock.o 00:06:16.251 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:16.251 CC module/event/subsystems/iobuf/iobuf.o 00:06:16.251 CC module/event/subsystems/keyring/keyring.o 00:06:16.251 LIB libspdk_event_fsdev.a 00:06:16.251 LIB libspdk_event_keyring.a 00:06:16.251 LIB libspdk_event_scheduler.a 00:06:16.251 LIB libspdk_event_sock.a 00:06:16.251 SO libspdk_event_keyring.so.1.0 00:06:16.251 SO libspdk_event_fsdev.so.1.0 00:06:16.251 SO libspdk_event_scheduler.so.4.0 00:06:16.251 SO libspdk_event_sock.so.5.0 00:06:16.251 LIB libspdk_event_iobuf.a 00:06:16.251 LIB libspdk_event_vhost_blk.a 00:06:16.251 SO libspdk_event_vhost_blk.so.3.0 00:06:16.251 SYMLINK libspdk_event_keyring.so 00:06:16.252 SYMLINK libspdk_event_sock.so 00:06:16.252 SO libspdk_event_iobuf.so.3.0 00:06:16.252 SYMLINK libspdk_event_fsdev.so 00:06:16.252 LIB libspdk_event_vmd.a 00:06:16.252 SYMLINK libspdk_event_scheduler.so 00:06:16.252 SYMLINK libspdk_event_vhost_blk.so 00:06:16.252 SYMLINK libspdk_event_iobuf.so 00:06:16.252 SO libspdk_event_vmd.so.6.0 00:06:16.510 SYMLINK libspdk_event_vmd.so 00:06:16.768 CC module/event/subsystems/accel/accel.o 00:06:17.025 LIB libspdk_event_accel.a 00:06:17.025 SO libspdk_event_accel.so.6.0 00:06:17.025 SYMLINK libspdk_event_accel.so 00:06:17.590 CC module/event/subsystems/bdev/bdev.o 00:06:17.590 LIB libspdk_event_bdev.a 00:06:17.590 SO libspdk_event_bdev.so.6.0 00:06:17.848 SYMLINK libspdk_event_bdev.so 00:06:18.106 CC module/event/subsystems/scsi/scsi.o 00:06:18.106 CC module/event/subsystems/ublk/ublk.o 00:06:18.106 CC module/event/subsystems/nbd/nbd.o 00:06:18.106 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:18.106 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:18.364 LIB libspdk_event_scsi.a 00:06:18.364 LIB libspdk_event_ublk.a 00:06:18.364 LIB libspdk_event_nbd.a 00:06:18.364 SO libspdk_event_scsi.so.6.0 00:06:18.364 SO libspdk_event_ublk.so.3.0 00:06:18.364 SO libspdk_event_nbd.so.6.0 00:06:18.364 LIB libspdk_event_nvmf.a 00:06:18.364 SYMLINK libspdk_event_ublk.so 00:06:18.364 SYMLINK libspdk_event_scsi.so 00:06:18.364 SYMLINK libspdk_event_nbd.so 00:06:18.364 SO libspdk_event_nvmf.so.6.0 00:06:18.623 SYMLINK libspdk_event_nvmf.so 00:06:18.881 CC module/event/subsystems/iscsi/iscsi.o 00:06:18.881 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:18.881 LIB libspdk_event_iscsi.a 00:06:18.881 LIB libspdk_event_vhost_scsi.a 00:06:19.139 SO libspdk_event_iscsi.so.6.0 00:06:19.139 SO libspdk_event_vhost_scsi.so.3.0 00:06:19.139 SYMLINK libspdk_event_vhost_scsi.so 00:06:19.139 SYMLINK libspdk_event_iscsi.so 00:06:19.398 SO libspdk.so.6.0 00:06:19.398 SYMLINK libspdk.so 00:06:19.657 TEST_HEADER include/spdk/accel.h 00:06:19.657 CC test/rpc_client/rpc_client_test.o 00:06:19.657 TEST_HEADER include/spdk/accel_module.h 00:06:19.657 TEST_HEADER include/spdk/assert.h 00:06:19.657 CXX app/trace/trace.o 00:06:19.657 TEST_HEADER include/spdk/barrier.h 00:06:19.657 TEST_HEADER 
include/spdk/base64.h 00:06:19.657 TEST_HEADER include/spdk/bdev.h 00:06:19.657 TEST_HEADER include/spdk/bdev_module.h 00:06:19.657 TEST_HEADER include/spdk/bdev_zone.h 00:06:19.657 TEST_HEADER include/spdk/bit_array.h 00:06:19.657 TEST_HEADER include/spdk/bit_pool.h 00:06:19.657 TEST_HEADER include/spdk/blob_bdev.h 00:06:19.657 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:19.657 TEST_HEADER include/spdk/blobfs.h 00:06:19.657 TEST_HEADER include/spdk/blob.h 00:06:19.657 TEST_HEADER include/spdk/conf.h 00:06:19.657 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:19.657 TEST_HEADER include/spdk/config.h 00:06:19.657 TEST_HEADER include/spdk/cpuset.h 00:06:19.657 TEST_HEADER include/spdk/crc16.h 00:06:19.657 TEST_HEADER include/spdk/crc32.h 00:06:19.657 TEST_HEADER include/spdk/crc64.h 00:06:19.657 TEST_HEADER include/spdk/dif.h 00:06:19.657 TEST_HEADER include/spdk/dma.h 00:06:19.657 TEST_HEADER include/spdk/endian.h 00:06:19.657 TEST_HEADER include/spdk/env_dpdk.h 00:06:19.657 TEST_HEADER include/spdk/env.h 00:06:19.657 TEST_HEADER include/spdk/event.h 00:06:19.657 TEST_HEADER include/spdk/fd_group.h 00:06:19.657 TEST_HEADER include/spdk/fd.h 00:06:19.657 TEST_HEADER include/spdk/file.h 00:06:19.657 TEST_HEADER include/spdk/fsdev.h 00:06:19.657 TEST_HEADER include/spdk/fsdev_module.h 00:06:19.657 CC examples/util/zipf/zipf.o 00:06:19.657 TEST_HEADER include/spdk/ftl.h 00:06:19.657 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:19.657 TEST_HEADER include/spdk/gpt_spec.h 00:06:19.657 TEST_HEADER include/spdk/hexlify.h 00:06:19.657 TEST_HEADER include/spdk/histogram_data.h 00:06:19.657 TEST_HEADER include/spdk/idxd.h 00:06:19.657 TEST_HEADER include/spdk/idxd_spec.h 00:06:19.657 TEST_HEADER include/spdk/init.h 00:06:19.657 TEST_HEADER include/spdk/ioat.h 00:06:19.657 TEST_HEADER include/spdk/ioat_spec.h 00:06:19.657 CC test/thread/poller_perf/poller_perf.o 00:06:19.657 TEST_HEADER include/spdk/iscsi_spec.h 00:06:19.657 TEST_HEADER include/spdk/json.h 00:06:19.657 CC examples/ioat/perf/perf.o 00:06:19.657 TEST_HEADER include/spdk/jsonrpc.h 00:06:19.657 TEST_HEADER include/spdk/keyring.h 00:06:19.657 TEST_HEADER include/spdk/keyring_module.h 00:06:19.657 TEST_HEADER include/spdk/likely.h 00:06:19.657 TEST_HEADER include/spdk/log.h 00:06:19.657 TEST_HEADER include/spdk/lvol.h 00:06:19.657 TEST_HEADER include/spdk/md5.h 00:06:19.657 TEST_HEADER include/spdk/memory.h 00:06:19.657 TEST_HEADER include/spdk/mmio.h 00:06:19.657 TEST_HEADER include/spdk/nbd.h 00:06:19.657 CC test/app/bdev_svc/bdev_svc.o 00:06:19.657 TEST_HEADER include/spdk/net.h 00:06:19.657 TEST_HEADER include/spdk/notify.h 00:06:19.657 TEST_HEADER include/spdk/nvme.h 00:06:19.657 TEST_HEADER include/spdk/nvme_intel.h 00:06:19.657 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:19.657 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:19.916 TEST_HEADER include/spdk/nvme_spec.h 00:06:19.916 TEST_HEADER include/spdk/nvme_zns.h 00:06:19.916 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:19.916 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:19.916 TEST_HEADER include/spdk/nvmf.h 00:06:19.916 TEST_HEADER include/spdk/nvmf_spec.h 00:06:19.916 TEST_HEADER include/spdk/nvmf_transport.h 00:06:19.916 TEST_HEADER include/spdk/opal.h 00:06:19.916 TEST_HEADER include/spdk/opal_spec.h 00:06:19.916 TEST_HEADER include/spdk/pci_ids.h 00:06:19.916 TEST_HEADER include/spdk/pipe.h 00:06:19.916 TEST_HEADER include/spdk/queue.h 00:06:19.916 TEST_HEADER include/spdk/reduce.h 00:06:19.916 TEST_HEADER include/spdk/rpc.h 00:06:19.916 TEST_HEADER 
include/spdk/scheduler.h 00:06:19.916 TEST_HEADER include/spdk/scsi.h 00:06:19.916 TEST_HEADER include/spdk/scsi_spec.h 00:06:19.916 LINK rpc_client_test 00:06:19.916 TEST_HEADER include/spdk/sock.h 00:06:19.916 TEST_HEADER include/spdk/stdinc.h 00:06:19.916 CC test/dma/test_dma/test_dma.o 00:06:19.916 TEST_HEADER include/spdk/string.h 00:06:19.916 TEST_HEADER include/spdk/thread.h 00:06:19.916 CC test/env/mem_callbacks/mem_callbacks.o 00:06:19.916 TEST_HEADER include/spdk/trace.h 00:06:19.916 TEST_HEADER include/spdk/trace_parser.h 00:06:19.916 TEST_HEADER include/spdk/tree.h 00:06:19.916 TEST_HEADER include/spdk/ublk.h 00:06:19.916 TEST_HEADER include/spdk/util.h 00:06:19.916 TEST_HEADER include/spdk/uuid.h 00:06:19.916 TEST_HEADER include/spdk/version.h 00:06:19.916 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:19.916 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:19.916 TEST_HEADER include/spdk/vhost.h 00:06:19.916 TEST_HEADER include/spdk/vmd.h 00:06:19.916 TEST_HEADER include/spdk/xor.h 00:06:19.916 TEST_HEADER include/spdk/zipf.h 00:06:19.916 CXX test/cpp_headers/accel.o 00:06:19.916 LINK interrupt_tgt 00:06:19.916 LINK poller_perf 00:06:19.916 LINK zipf 00:06:19.916 LINK bdev_svc 00:06:19.916 LINK ioat_perf 00:06:19.916 CXX test/cpp_headers/accel_module.o 00:06:20.191 LINK spdk_trace 00:06:20.191 CC test/app/histogram_perf/histogram_perf.o 00:06:20.191 CC test/app/jsoncat/jsoncat.o 00:06:20.191 CXX test/cpp_headers/assert.o 00:06:20.191 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:20.450 LINK histogram_perf 00:06:20.450 CC app/trace_record/trace_record.o 00:06:20.450 LINK jsoncat 00:06:20.450 CXX test/cpp_headers/barrier.o 00:06:20.450 CC examples/ioat/verify/verify.o 00:06:20.450 CC test/event/event_perf/event_perf.o 00:06:20.450 LINK test_dma 00:06:20.450 LINK mem_callbacks 00:06:20.450 CC examples/thread/thread/thread_ex.o 00:06:20.709 CXX test/cpp_headers/base64.o 00:06:20.709 LINK event_perf 00:06:20.709 LINK spdk_trace_record 00:06:20.709 CC examples/sock/hello_world/hello_sock.o 00:06:20.709 CXX test/cpp_headers/bdev.o 00:06:20.709 CC examples/vmd/lsvmd/lsvmd.o 00:06:20.709 LINK verify 00:06:20.709 CC test/env/vtophys/vtophys.o 00:06:20.709 LINK thread 00:06:20.969 LINK lsvmd 00:06:20.969 CXX test/cpp_headers/bdev_module.o 00:06:20.969 LINK vtophys 00:06:20.969 CC app/nvmf_tgt/nvmf_main.o 00:06:20.969 CC test/event/reactor/reactor.o 00:06:21.228 LINK hello_sock 00:06:21.228 CC test/accel/dif/dif.o 00:06:21.228 CXX test/cpp_headers/bdev_zone.o 00:06:21.228 LINK reactor 00:06:21.228 CC examples/vmd/led/led.o 00:06:21.228 CC test/blobfs/mkfs/mkfs.o 00:06:21.228 LINK nvmf_tgt 00:06:21.228 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:21.228 LINK nvme_fuzz 00:06:21.486 CXX test/cpp_headers/bit_array.o 00:06:21.486 LINK led 00:06:21.486 LINK mkfs 00:06:21.486 LINK env_dpdk_post_init 00:06:21.486 CC test/lvol/esnap/esnap.o 00:06:21.486 CC test/event/reactor_perf/reactor_perf.o 00:06:21.486 CC test/nvme/aer/aer.o 00:06:21.486 CXX test/cpp_headers/bit_pool.o 00:06:21.746 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:21.746 CC app/iscsi_tgt/iscsi_tgt.o 00:06:21.746 CXX test/cpp_headers/blob_bdev.o 00:06:21.746 LINK reactor_perf 00:06:21.746 CC test/env/memory/memory_ut.o 00:06:21.746 CC examples/idxd/perf/perf.o 00:06:22.005 LINK aer 00:06:22.005 LINK iscsi_tgt 00:06:22.005 CC test/event/app_repeat/app_repeat.o 00:06:22.005 LINK dif 00:06:22.005 CXX test/cpp_headers/blobfs_bdev.o 00:06:22.005 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:22.265 CC 
test/nvme/reset/reset.o 00:06:22.265 LINK app_repeat 00:06:22.265 CXX test/cpp_headers/blobfs.o 00:06:22.265 LINK idxd_perf 00:06:22.265 CC app/spdk_tgt/spdk_tgt.o 00:06:22.525 LINK hello_fsdev 00:06:22.525 CXX test/cpp_headers/blob.o 00:06:22.525 LINK reset 00:06:22.525 CC examples/accel/perf/accel_perf.o 00:06:22.525 CC test/event/scheduler/scheduler.o 00:06:22.525 LINK spdk_tgt 00:06:22.525 CXX test/cpp_headers/conf.o 00:06:22.786 CC test/bdev/bdevio/bdevio.o 00:06:22.786 CC test/nvme/sgl/sgl.o 00:06:22.786 LINK scheduler 00:06:22.786 CXX test/cpp_headers/config.o 00:06:22.786 CXX test/cpp_headers/cpuset.o 00:06:22.786 CC app/spdk_lspci/spdk_lspci.o 00:06:22.786 CC examples/blob/hello_world/hello_blob.o 00:06:23.045 LINK sgl 00:06:23.045 LINK spdk_lspci 00:06:23.045 LINK memory_ut 00:06:23.045 CXX test/cpp_headers/crc16.o 00:06:23.045 LINK accel_perf 00:06:23.045 LINK bdevio 00:06:23.045 LINK hello_blob 00:06:23.045 CC examples/nvme/hello_world/hello_world.o 00:06:23.303 CXX test/cpp_headers/crc32.o 00:06:23.303 CC app/spdk_nvme_perf/perf.o 00:06:23.303 CC test/nvme/e2edp/nvme_dp.o 00:06:23.303 CC examples/nvme/reconnect/reconnect.o 00:06:23.303 CC test/env/pci/pci_ut.o 00:06:23.303 CXX test/cpp_headers/crc64.o 00:06:23.561 LINK hello_world 00:06:23.561 CC examples/blob/cli/blobcli.o 00:06:23.561 CXX test/cpp_headers/dif.o 00:06:23.561 LINK nvme_dp 00:06:23.561 CXX test/cpp_headers/dma.o 00:06:23.561 CC examples/bdev/hello_world/hello_bdev.o 00:06:23.820 LINK iscsi_fuzz 00:06:23.820 LINK reconnect 00:06:23.820 CXX test/cpp_headers/endian.o 00:06:23.820 LINK pci_ut 00:06:23.820 CC test/nvme/overhead/overhead.o 00:06:23.820 CC examples/bdev/bdevperf/bdevperf.o 00:06:23.820 LINK hello_bdev 00:06:24.079 CXX test/cpp_headers/env_dpdk.o 00:06:24.079 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:24.079 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:24.079 LINK blobcli 00:06:24.079 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:24.079 CXX test/cpp_headers/env.o 00:06:24.079 CXX test/cpp_headers/event.o 00:06:24.338 LINK overhead 00:06:24.338 LINK spdk_nvme_perf 00:06:24.338 CC examples/nvme/arbitration/arbitration.o 00:06:24.338 CXX test/cpp_headers/fd_group.o 00:06:24.338 CC examples/nvme/hotplug/hotplug.o 00:06:24.625 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:24.625 CXX test/cpp_headers/fd.o 00:06:24.625 CC test/nvme/err_injection/err_injection.o 00:06:24.625 CC app/spdk_nvme_identify/identify.o 00:06:24.625 LINK vhost_fuzz 00:06:24.625 LINK nvme_manage 00:06:24.625 LINK arbitration 00:06:24.625 LINK cmb_copy 00:06:24.625 CXX test/cpp_headers/file.o 00:06:24.625 LINK err_injection 00:06:24.625 LINK hotplug 00:06:24.898 CXX test/cpp_headers/fsdev.o 00:06:24.898 LINK bdevperf 00:06:24.898 CC test/app/stub/stub.o 00:06:24.898 CC examples/nvme/abort/abort.o 00:06:24.898 CXX test/cpp_headers/fsdev_module.o 00:06:24.898 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:24.898 CC app/spdk_nvme_discover/discovery_aer.o 00:06:24.898 CC test/nvme/startup/startup.o 00:06:25.159 LINK stub 00:06:25.159 CXX test/cpp_headers/ftl.o 00:06:25.159 LINK pmr_persistence 00:06:25.159 CC app/spdk_top/spdk_top.o 00:06:25.159 LINK spdk_nvme_discover 00:06:25.159 LINK startup 00:06:25.159 CC app/vhost/vhost.o 00:06:25.159 CXX test/cpp_headers/fuse_dispatcher.o 00:06:25.159 CXX test/cpp_headers/gpt_spec.o 00:06:25.418 LINK abort 00:06:25.418 CC test/nvme/reserve/reserve.o 00:06:25.418 CXX test/cpp_headers/hexlify.o 00:06:25.418 LINK vhost 00:06:25.418 CXX test/cpp_headers/histogram_data.o 00:06:25.418 CC 
app/spdk_dd/spdk_dd.o 00:06:25.418 LINK spdk_nvme_identify 00:06:25.418 CXX test/cpp_headers/idxd.o 00:06:25.683 LINK reserve 00:06:25.683 CC app/fio/nvme/fio_plugin.o 00:06:25.684 CXX test/cpp_headers/idxd_spec.o 00:06:25.684 CC examples/nvmf/nvmf/nvmf.o 00:06:25.684 CC app/fio/bdev/fio_plugin.o 00:06:25.684 CC test/nvme/simple_copy/simple_copy.o 00:06:25.684 CC test/nvme/connect_stress/connect_stress.o 00:06:25.943 CXX test/cpp_headers/init.o 00:06:25.943 CC test/nvme/boot_partition/boot_partition.o 00:06:25.943 LINK spdk_dd 00:06:25.943 CXX test/cpp_headers/ioat.o 00:06:25.943 LINK connect_stress 00:06:25.943 LINK simple_copy 00:06:25.943 LINK boot_partition 00:06:25.943 LINK nvmf 00:06:26.201 LINK spdk_top 00:06:26.201 CXX test/cpp_headers/ioat_spec.o 00:06:26.201 LINK spdk_nvme 00:06:26.201 CC test/nvme/compliance/nvme_compliance.o 00:06:26.201 CC test/nvme/fused_ordering/fused_ordering.o 00:06:26.201 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:26.201 CC test/nvme/fdp/fdp.o 00:06:26.201 CXX test/cpp_headers/iscsi_spec.o 00:06:26.459 CXX test/cpp_headers/json.o 00:06:26.459 CC test/nvme/cuse/cuse.o 00:06:26.459 LINK spdk_bdev 00:06:26.459 CXX test/cpp_headers/jsonrpc.o 00:06:26.459 CXX test/cpp_headers/keyring.o 00:06:26.459 CXX test/cpp_headers/keyring_module.o 00:06:26.459 LINK doorbell_aers 00:06:26.459 LINK fused_ordering 00:06:26.459 CXX test/cpp_headers/likely.o 00:06:26.459 CXX test/cpp_headers/log.o 00:06:26.718 LINK nvme_compliance 00:06:26.718 CXX test/cpp_headers/lvol.o 00:06:26.718 CXX test/cpp_headers/md5.o 00:06:26.718 CXX test/cpp_headers/memory.o 00:06:26.718 CXX test/cpp_headers/mmio.o 00:06:26.718 CXX test/cpp_headers/nbd.o 00:06:26.718 CXX test/cpp_headers/net.o 00:06:26.718 LINK fdp 00:06:26.718 CXX test/cpp_headers/notify.o 00:06:26.718 CXX test/cpp_headers/nvme.o 00:06:26.718 CXX test/cpp_headers/nvme_intel.o 00:06:26.718 CXX test/cpp_headers/nvme_ocssd.o 00:06:26.718 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:26.977 CXX test/cpp_headers/nvme_spec.o 00:06:26.977 CXX test/cpp_headers/nvme_zns.o 00:06:26.977 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:26.977 CXX test/cpp_headers/nvmf_cmd.o 00:06:26.977 CXX test/cpp_headers/nvmf.o 00:06:26.977 CXX test/cpp_headers/nvmf_spec.o 00:06:26.977 CXX test/cpp_headers/nvmf_transport.o 00:06:26.977 CXX test/cpp_headers/opal.o 00:06:26.977 CXX test/cpp_headers/opal_spec.o 00:06:26.977 CXX test/cpp_headers/pci_ids.o 00:06:26.977 CXX test/cpp_headers/pipe.o 00:06:27.235 CXX test/cpp_headers/queue.o 00:06:27.235 CXX test/cpp_headers/reduce.o 00:06:27.235 CXX test/cpp_headers/rpc.o 00:06:27.235 CXX test/cpp_headers/scheduler.o 00:06:27.235 CXX test/cpp_headers/scsi.o 00:06:27.235 CXX test/cpp_headers/scsi_spec.o 00:06:27.235 CXX test/cpp_headers/sock.o 00:06:27.235 CXX test/cpp_headers/stdinc.o 00:06:27.235 CXX test/cpp_headers/string.o 00:06:27.235 CXX test/cpp_headers/thread.o 00:06:27.235 CXX test/cpp_headers/trace.o 00:06:27.235 CXX test/cpp_headers/trace_parser.o 00:06:27.493 CXX test/cpp_headers/tree.o 00:06:27.493 CXX test/cpp_headers/ublk.o 00:06:27.493 CXX test/cpp_headers/util.o 00:06:27.493 CXX test/cpp_headers/uuid.o 00:06:27.493 CXX test/cpp_headers/version.o 00:06:27.493 CXX test/cpp_headers/vfio_user_pci.o 00:06:27.493 CXX test/cpp_headers/vfio_user_spec.o 00:06:27.493 CXX test/cpp_headers/vhost.o 00:06:27.493 CXX test/cpp_headers/vmd.o 00:06:27.493 CXX test/cpp_headers/xor.o 00:06:27.493 CXX test/cpp_headers/zipf.o 00:06:27.752 LINK cuse 00:06:27.752 LINK esnap 00:06:28.319 00:06:28.319 real 1m29.221s 
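[editor's note] The make stage that just closed ends with a long CXX sweep producing one test/cpp_headers/*.o object per public header. That is a header self-containedness check: each include/spdk/*.h is compiled in isolation, so a header that forgets one of its own #includes fails here rather than in a downstream consumer. A minimal standalone sketch of the technique (not SPDK's actual Makefile rule; assumes it is run from a source tree with an include/spdk directory):

    # Compile a one-line translation unit per public header; any header
    # that does not include what it uses fails on its own.
    for h in include/spdk/*.h; do
        obj="cpp_headers_$(basename "$h" .h).o"
        printf '#include <spdk/%s>\n' "$(basename "$h")" \
            | c++ -x c++ -I include -c -o "$obj" - \
            || echo "not self-contained: $h"
    done
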
00:06:28.319 user 8m0.860s 00:06:28.319 sys 1m56.187s 00:06:28.319 18:04:38 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:28.319 ************************************ 00:06:28.319 END TEST make 00:06:28.319 ************************************ 00:06:28.319 18:04:38 make -- common/autotest_common.sh@10 -- $ set +x 00:06:28.319 18:04:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:28.319 18:04:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:28.319 18:04:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:28.319 18:04:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:28.319 18:04:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:28.319 18:04:38 -- pm/common@44 -- $ pid=5295 00:06:28.319 18:04:38 -- pm/common@50 -- $ kill -TERM 5295 00:06:28.319 18:04:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:28.319 18:04:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:28.319 18:04:38 -- pm/common@44 -- $ pid=5297 00:06:28.319 18:04:38 -- pm/common@50 -- $ kill -TERM 5297 00:06:28.319 18:04:38 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:28.319 18:04:38 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:28.578 18:04:38 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:28.578 18:04:38 -- common/autotest_common.sh@1711 -- # lcov --version 00:06:28.578 18:04:38 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:28.578 18:04:38 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:28.578 18:04:38 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.578 18:04:38 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.578 18:04:38 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.578 18:04:38 -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.578 18:04:38 -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.578 18:04:38 -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.578 18:04:38 -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.578 18:04:38 -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.578 18:04:38 -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.578 18:04:38 -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.578 18:04:38 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.578 18:04:38 -- scripts/common.sh@344 -- # case "$op" in 00:06:28.578 18:04:38 -- scripts/common.sh@345 -- # : 1 00:06:28.578 18:04:38 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.578 18:04:38 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.578 18:04:38 -- scripts/common.sh@365 -- # decimal 1 00:06:28.578 18:04:39 -- scripts/common.sh@353 -- # local d=1 00:06:28.578 18:04:39 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.578 18:04:39 -- scripts/common.sh@355 -- # echo 1 00:06:28.578 18:04:39 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.578 18:04:39 -- scripts/common.sh@366 -- # decimal 2 00:06:28.578 18:04:39 -- scripts/common.sh@353 -- # local d=2 00:06:28.578 18:04:39 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.578 18:04:39 -- scripts/common.sh@355 -- # echo 2 00:06:28.578 18:04:39 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.578 18:04:39 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.578 18:04:39 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.578 18:04:39 -- scripts/common.sh@368 -- # return 0 00:06:28.578 18:04:39 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.578 18:04:39 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:28.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.578 --rc genhtml_branch_coverage=1 00:06:28.578 --rc genhtml_function_coverage=1 00:06:28.578 --rc genhtml_legend=1 00:06:28.578 --rc geninfo_all_blocks=1 00:06:28.578 --rc geninfo_unexecuted_blocks=1 00:06:28.578 00:06:28.578 ' 00:06:28.578 18:04:39 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:28.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.578 --rc genhtml_branch_coverage=1 00:06:28.578 --rc genhtml_function_coverage=1 00:06:28.578 --rc genhtml_legend=1 00:06:28.578 --rc geninfo_all_blocks=1 00:06:28.578 --rc geninfo_unexecuted_blocks=1 00:06:28.578 00:06:28.578 ' 00:06:28.578 18:04:39 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:28.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.578 --rc genhtml_branch_coverage=1 00:06:28.578 --rc genhtml_function_coverage=1 00:06:28.578 --rc genhtml_legend=1 00:06:28.578 --rc geninfo_all_blocks=1 00:06:28.578 --rc geninfo_unexecuted_blocks=1 00:06:28.578 00:06:28.578 ' 00:06:28.578 18:04:39 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:28.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.578 --rc genhtml_branch_coverage=1 00:06:28.578 --rc genhtml_function_coverage=1 00:06:28.578 --rc genhtml_legend=1 00:06:28.578 --rc geninfo_all_blocks=1 00:06:28.578 --rc geninfo_unexecuted_blocks=1 00:06:28.578 00:06:28.578 ' 00:06:28.578 18:04:39 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:28.578 18:04:39 -- nvmf/common.sh@7 -- # uname -s 00:06:28.578 18:04:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.578 18:04:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.578 18:04:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.578 18:04:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.578 18:04:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.578 18:04:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.578 18:04:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.578 18:04:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.578 18:04:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.578 18:04:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.578 18:04:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8fbcdf99-1d6c-4dcf-8c56-70f8c7f05438 00:06:28.578 
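[editor's note] The xtrace above captures autotest_common.sh's pure-bash version compare: the installed lcov version is split on the characters ".-:" into an array and compared field by field against 1.15/2 to pick which --rc flag spelling the installed lcov accepts. A standalone sketch of the same technique (ver_lt is a hypothetical helper name):

    # ver_lt A B: succeeds when version A sorts strictly before version B.
    ver_lt() {
        local -a a b
        local i
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    # lcov 1.x predates the 2.x option renames, so keep the old flags:
    ver_lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
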
18:04:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=8fbcdf99-1d6c-4dcf-8c56-70f8c7f05438 00:06:28.578 18:04:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.578 18:04:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.578 18:04:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:28.578 18:04:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.578 18:04:39 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:28.578 18:04:39 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:28.578 18:04:39 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.578 18:04:39 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.578 18:04:39 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.578 18:04:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.578 18:04:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.578 18:04:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.578 18:04:39 -- paths/export.sh@5 -- # export PATH 00:06:28.578 18:04:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.578 18:04:39 -- nvmf/common.sh@51 -- # : 0 00:06:28.578 18:04:39 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:28.578 18:04:39 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:28.578 18:04:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.578 18:04:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.578 18:04:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.578 18:04:39 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:28.578 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:28.578 18:04:39 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:28.578 18:04:39 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:28.578 18:04:39 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:28.578 18:04:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:28.578 18:04:39 -- spdk/autotest.sh@32 -- # uname -s 00:06:28.578 18:04:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:28.578 18:04:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:28.578 18:04:39 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:28.578 18:04:39 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:28.578 18:04:39 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:28.578 18:04:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:28.578 18:04:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:28.578 18:04:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:28.578 18:04:39 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:28.578 18:04:39 -- spdk/autotest.sh@48 -- # udevadm_pid=54799 00:06:28.578 18:04:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:28.578 18:04:39 -- pm/common@17 -- # local monitor 00:06:28.578 18:04:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:28.578 18:04:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:28.578 18:04:39 -- pm/common@25 -- # sleep 1 00:06:28.578 18:04:39 -- pm/common@21 -- # date +%s 00:06:28.578 18:04:39 -- pm/common@21 -- # date +%s 00:06:28.837 18:04:39 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733508279 00:06:28.837 18:04:39 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733508279 00:06:28.837 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733508279_collect-vmstat.pm.log 00:06:28.837 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733508279_collect-cpu-load.pm.log 00:06:29.773 18:04:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:29.773 18:04:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:29.773 18:04:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:29.773 18:04:40 -- common/autotest_common.sh@10 -- # set +x 00:06:29.773 18:04:40 -- spdk/autotest.sh@59 -- # create_test_list 00:06:29.773 18:04:40 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:29.773 18:04:40 -- common/autotest_common.sh@10 -- # set +x 00:06:29.773 18:04:40 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:29.773 18:04:40 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:29.773 18:04:40 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:29.773 18:04:40 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:29.773 18:04:40 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:29.773 18:04:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:29.773 18:04:40 -- common/autotest_common.sh@1457 -- # uname 00:06:29.773 18:04:40 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:29.773 18:04:40 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:29.773 18:04:40 -- common/autotest_common.sh@1477 -- # uname 00:06:29.773 18:04:40 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:29.773 18:04:40 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:29.773 18:04:40 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:29.773 lcov: LCOV version 1.15 00:06:29.773 18:04:40 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:47.856 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:47.856 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:02.736 18:05:11 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:02.736 18:05:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:02.736 18:05:11 -- common/autotest_common.sh@10 -- # set +x 00:07:02.736 18:05:11 -- spdk/autotest.sh@78 -- # rm -f 00:07:02.736 18:05:11 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:02.737 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:02.737 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:02.737 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:02.737 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:07:02.737 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:07:02.737 18:05:12 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:02.737 18:05:12 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:02.737 18:05:12 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:02.737 18:05:12 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:07:02.737 18:05:12 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:07:02.737 18:05:12 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:07:02.737 18:05:12 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:02.737 18:05:12 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:07:02.737 18:05:12 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:02.737 18:05:12 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:07:02.737 18:05:12 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:02.737 18:05:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:02.737 18:05:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:02.737 18:05:12 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:02.737 18:05:12 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:07:02.737 18:05:12 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:02.737 18:05:12 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:07:02.737 18:05:12 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:02.737 18:05:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:02.737 18:05:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:02.737 18:05:12 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:02.737 18:05:12 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:07:02.737 18:05:12 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:02.737 18:05:12 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:07:02.737 18:05:12 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:02.737 18:05:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:02.737 18:05:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:02.737 18:05:12 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:02.737 18:05:12 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:07:02.737 18:05:12 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:02.737 18:05:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:02.737 18:05:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:02.737 18:05:12 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:02.737 18:05:12 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:07:02.737 18:05:12 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:02.737 18:05:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:02.737 18:05:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:02.737 18:05:12 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:02.737 18:05:12 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:07:02.737 18:05:12 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:02.737 18:05:12 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:07:02.737 18:05:12 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:02.737 18:05:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:02.737 18:05:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:02.737 18:05:12 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:02.737 18:05:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:02.737 18:05:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:02.737 18:05:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:02.737 18:05:12 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:02.737 18:05:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:02.737 No valid GPT data, bailing 00:07:02.737 18:05:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:02.737 18:05:12 -- scripts/common.sh@394 -- # pt= 00:07:02.737 18:05:12 -- scripts/common.sh@395 -- # return 1 00:07:02.737 18:05:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:02.737 1+0 records in 00:07:02.737 1+0 records out 00:07:02.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155271 s, 67.5 MB/s 00:07:02.737 18:05:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:02.737 18:05:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:02.737 18:05:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:07:02.737 18:05:12 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:07:02.737 18:05:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:02.737 No valid GPT data, bailing 00:07:02.737 18:05:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:02.737 18:05:13 -- scripts/common.sh@394 -- # pt= 00:07:02.737 18:05:13 -- scripts/common.sh@395 -- # return 1 00:07:02.737 18:05:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:02.737 1+0 records in 00:07:02.737 1+0 records out 00:07:02.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00579734 s, 181 MB/s 00:07:02.737 18:05:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:02.737 18:05:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:02.737 18:05:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:07:02.737 18:05:13 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:07:02.737 18:05:13 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:07:02.737 No valid GPT data, bailing 00:07:02.737 18:05:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:07:02.737 18:05:13 -- scripts/common.sh@394 -- # pt= 00:07:02.737 18:05:13 -- scripts/common.sh@395 -- # return 1 00:07:02.737 18:05:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:07:02.737 1+0 records in 00:07:02.737 1+0 records out 00:07:02.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00577237 s, 182 MB/s 00:07:02.737 18:05:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:02.737 18:05:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:02.737 18:05:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:07:02.737 18:05:13 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:07:02.737 18:05:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:07:02.737 No valid GPT data, bailing 00:07:02.737 18:05:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:07:02.737 18:05:13 -- scripts/common.sh@394 -- # pt= 00:07:02.737 18:05:13 -- scripts/common.sh@395 -- # return 1 00:07:02.737 18:05:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:07:02.737 1+0 records in 00:07:02.737 1+0 records out 00:07:02.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0063995 s, 164 MB/s 00:07:02.737 18:05:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:02.737 18:05:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:02.737 18:05:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:07:02.737 18:05:13 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:07:02.737 18:05:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:07:02.737 No valid GPT data, bailing 00:07:02.737 18:05:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:07:02.737 18:05:13 -- scripts/common.sh@394 -- # pt= 00:07:02.737 18:05:13 -- scripts/common.sh@395 -- # return 1 00:07:02.737 18:05:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:07:02.997 1+0 records in 00:07:02.997 1+0 records out 00:07:02.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516157 s, 203 MB/s 00:07:02.997 18:05:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:02.997 18:05:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:02.997 18:05:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:07:02.997 18:05:13 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:07:02.997 18:05:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:07:02.997 No valid GPT data, bailing 00:07:02.997 18:05:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:07:02.997 18:05:13 -- scripts/common.sh@394 -- # pt= 00:07:02.997 18:05:13 -- scripts/common.sh@395 -- # return 1 00:07:02.997 18:05:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:07:02.997 1+0 records in 00:07:02.997 1+0 records out 00:07:02.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00557738 s, 188 MB/s 00:07:02.997 18:05:13 -- spdk/autotest.sh@105 -- # sync 00:07:02.997 18:05:13 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:02.997 18:05:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:02.997 18:05:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:06.282 
18:05:16 -- spdk/autotest.sh@111 -- # uname -s 00:07:06.282 18:05:16 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:06.282 18:05:16 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:06.282 18:05:16 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:06.604 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:07.170 Hugepages 00:07:07.170 node hugesize free / total 00:07:07.170 node0 1048576kB 0 / 0 00:07:07.170 node0 2048kB 0 / 0 00:07:07.170 00:07:07.170 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:07.170 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:07.428 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:07.428 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:07:07.685 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:07:07.685 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:07:07.685 18:05:18 -- spdk/autotest.sh@117 -- # uname -s 00:07:07.685 18:05:18 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:07.685 18:05:18 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:07.685 18:05:18 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:08.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:09.185 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.185 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.185 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.185 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.444 18:05:19 -- common/autotest_common.sh@1517 -- # sleep 1 00:07:10.380 18:05:20 -- common/autotest_common.sh@1518 -- # bdfs=() 00:07:10.380 18:05:20 -- common/autotest_common.sh@1518 -- # local bdfs 00:07:10.380 18:05:20 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:07:10.380 18:05:20 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:07:10.380 18:05:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:10.380 18:05:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:10.380 18:05:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:10.380 18:05:20 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:10.380 18:05:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:10.380 18:05:20 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:10.380 18:05:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:10.380 18:05:20 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:10.949 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:11.516 Waiting for block devices as requested 00:07:11.516 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:11.516 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:11.776 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:11.776 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:17.054 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:17.054 18:05:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:17.054 18:05:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
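[editor's note] The per-namespace loop that just finished follows a probe-then-claim pattern: a namespace counts as free only if spdk-gpt.py finds no valid GPT ("No valid GPT data, bailing") and blkid reports no partition-table type; free namespaces then get their first MiB zeroed so stale GPT/filesystem signatures from a previous run cannot leak into the tests. A minimal sketch of that pattern using blkid alone (not SPDK's spdk-gpt.py helper):

    # Zero the first MiB of every whole NVMe namespace that carries no
    # partition table; the wipe destroys any leftover on-disk metadata.
    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do
        pt=$(blkid -s PTTYPE -o value "$dev")
        if [[ -z "$pt" ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        else
            echo "$dev is in use (PTTYPE=$pt), skipping"
        fi
    done
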
00:07:17.054 18:05:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:17.054 18:05:27 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:07:17.054 18:05:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:17.054 18:05:27 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:17.054 18:05:27 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:17.054 18:05:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:07:17.054 18:05:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:07:17.054 18:05:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:07:17.054 18:05:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:07:17.054 18:05:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:17.054 18:05:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:17.054 18:05:27 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:17.054 18:05:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:17.054 18:05:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:17.054 18:05:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:17.054 18:05:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:07:17.054 18:05:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:17.054 18:05:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:17.054 18:05:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:17.054 18:05:27 -- common/autotest_common.sh@1543 -- # continue 00:07:17.054 18:05:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:17.054 18:05:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:17.055 18:05:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:17.055 18:05:27 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:07:17.055 18:05:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:17.055 18:05:27 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:17.055 18:05:27 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:17.055 18:05:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:17.055 18:05:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:17.055 18:05:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:17.055 18:05:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:17.055 18:05:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:17.055 18:05:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:17.055 18:05:27 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:17.055 18:05:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:17.055 18:05:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:17.055 18:05:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:07:17.055 18:05:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:17.055 18:05:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:17.055 18:05:27 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:07:17.055 18:05:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:17.055 18:05:27 -- common/autotest_common.sh@1543 -- # continue 00:07:17.055 18:05:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:17.055 18:05:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:07:17.055 18:05:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:17.055 18:05:27 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:07:17.055 18:05:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:17.055 18:05:27 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:07:17.055 18:05:27 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:17.055 18:05:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:07:17.055 18:05:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:07:17.055 18:05:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:07:17.055 18:05:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:07:17.055 18:05:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:17.055 18:05:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:17.055 18:05:27 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:17.055 18:05:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:17.055 18:05:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:17.055 18:05:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:17.055 18:05:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:07:17.055 18:05:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:17.055 18:05:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:17.055 18:05:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:17.055 18:05:27 -- common/autotest_common.sh@1543 -- # continue 00:07:17.055 18:05:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:17.055 18:05:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:07:17.055 18:05:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:17.055 18:05:27 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:07:17.055 18:05:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:17.055 18:05:27 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:07:17.055 18:05:27 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:17.055 18:05:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:07:17.055 18:05:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:07:17.055 18:05:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:07:17.055 18:05:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:07:17.055 18:05:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:17.055 18:05:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:17.055 18:05:27 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:17.055 18:05:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:17.055 18:05:27 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:17.055 18:05:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:07:17.055 18:05:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:17.055 18:05:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:17.055 18:05:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:17.055 18:05:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:17.055 18:05:27 -- common/autotest_common.sh@1543 -- # continue 00:07:17.055 18:05:27 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:17.055 18:05:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:17.055 18:05:27 -- common/autotest_common.sh@10 -- # set +x 00:07:17.055 18:05:27 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:17.055 18:05:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:17.055 18:05:27 -- common/autotest_common.sh@10 -- # set +x 00:07:17.055 18:05:27 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:17.994 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:18.584 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:18.584 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:18.584 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:18.842 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:18.842 18:05:29 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:18.842 18:05:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:18.842 18:05:29 -- common/autotest_common.sh@10 -- # set +x 00:07:18.842 18:05:29 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:18.842 18:05:29 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:18.842 18:05:29 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:18.842 18:05:29 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:18.842 18:05:29 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:18.842 18:05:29 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:18.842 18:05:29 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:18.842 18:05:29 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:18.842 18:05:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:18.842 18:05:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:18.842 18:05:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:18.842 18:05:29 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:18.842 18:05:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:19.100 18:05:29 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:19.100 18:05:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:19.100 18:05:29 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:19.100 18:05:29 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:19.100 18:05:29 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:19.100 18:05:29 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:19.100 18:05:29 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:19.100 18:05:29 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:19.100 18:05:29 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:19.100 
18:05:29 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:19.100 18:05:29 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:19.101 18:05:29 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:07:19.101 18:05:29 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:19.101 18:05:29 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:19.101 18:05:29 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:19.101 18:05:29 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:07:19.101 18:05:29 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:19.101 18:05:29 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:19.101 18:05:29 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:19.101 18:05:29 -- common/autotest_common.sh@1572 -- # return 0 00:07:19.101 18:05:29 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:19.101 18:05:29 -- common/autotest_common.sh@1580 -- # return 0 00:07:19.101 18:05:29 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:19.101 18:05:29 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:19.101 18:05:29 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:19.101 18:05:29 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:19.101 18:05:29 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:19.101 18:05:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:19.101 18:05:29 -- common/autotest_common.sh@10 -- # set +x 00:07:19.101 18:05:29 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:19.101 18:05:29 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:19.101 18:05:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.101 18:05:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.101 18:05:29 -- common/autotest_common.sh@10 -- # set +x 00:07:19.101 ************************************ 00:07:19.101 START TEST env 00:07:19.101 ************************************ 00:07:19.101 18:05:29 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:19.101 * Looking for test storage... 
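[editor's note] The pre-cleanup pass above gates namespace cleanup on two Identify Controller fields parsed straight out of nvme-cli text output: OACS (here 0x12a; bit 3, value 8, means the controller supports namespace management) and UNVMCAP (here 0, so there is no unallocated capacity to revert, and the loop continues). A sketch of that parsing (the controller path is an example):

    ctrl=/dev/nvme0   # example controller
    oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
    unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
    # Bit 3 of OACS flags namespace management: 0x12a & 0x8 == 8.
    if (( (oacs & 0x8) != 0 )) && (( unvmcap == 0 )); then
        echo "ns-manage supported, nothing unallocated -> skip revert"
    fi
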
00:07:19.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:19.101 18:05:29 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:19.101 18:05:29 env -- common/autotest_common.sh@1711 -- # lcov --version 00:07:19.101 18:05:29 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:19.101 18:05:29 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:19.101 18:05:29 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.101 18:05:29 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.101 18:05:29 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.101 18:05:29 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.101 18:05:29 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.101 18:05:29 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.101 18:05:29 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.360 18:05:29 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.360 18:05:29 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.360 18:05:29 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.360 18:05:29 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.360 18:05:29 env -- scripts/common.sh@344 -- # case "$op" in 00:07:19.360 18:05:29 env -- scripts/common.sh@345 -- # : 1 00:07:19.360 18:05:29 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.360 18:05:29 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:19.360 18:05:29 env -- scripts/common.sh@365 -- # decimal 1 00:07:19.360 18:05:29 env -- scripts/common.sh@353 -- # local d=1 00:07:19.360 18:05:29 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.360 18:05:29 env -- scripts/common.sh@355 -- # echo 1 00:07:19.360 18:05:29 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.360 18:05:29 env -- scripts/common.sh@366 -- # decimal 2 00:07:19.360 18:05:29 env -- scripts/common.sh@353 -- # local d=2 00:07:19.360 18:05:29 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.360 18:05:29 env -- scripts/common.sh@355 -- # echo 2 00:07:19.360 18:05:29 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.360 18:05:29 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.360 18:05:29 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.360 18:05:29 env -- scripts/common.sh@368 -- # return 0 00:07:19.360 18:05:29 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.360 18:05:29 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:19.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.360 --rc genhtml_branch_coverage=1 00:07:19.360 --rc genhtml_function_coverage=1 00:07:19.360 --rc genhtml_legend=1 00:07:19.360 --rc geninfo_all_blocks=1 00:07:19.360 --rc geninfo_unexecuted_blocks=1 00:07:19.360 00:07:19.360 ' 00:07:19.360 18:05:29 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:19.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.360 --rc genhtml_branch_coverage=1 00:07:19.360 --rc genhtml_function_coverage=1 00:07:19.360 --rc genhtml_legend=1 00:07:19.360 --rc geninfo_all_blocks=1 00:07:19.360 --rc geninfo_unexecuted_blocks=1 00:07:19.360 00:07:19.360 ' 00:07:19.360 18:05:29 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:19.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.360 --rc genhtml_branch_coverage=1 00:07:19.360 --rc genhtml_function_coverage=1 00:07:19.360 --rc 
genhtml_legend=1 00:07:19.360 --rc geninfo_all_blocks=1 00:07:19.360 --rc geninfo_unexecuted_blocks=1 00:07:19.360 00:07:19.360 ' 00:07:19.360 18:05:29 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:19.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.360 --rc genhtml_branch_coverage=1 00:07:19.360 --rc genhtml_function_coverage=1 00:07:19.360 --rc genhtml_legend=1 00:07:19.360 --rc geninfo_all_blocks=1 00:07:19.360 --rc geninfo_unexecuted_blocks=1 00:07:19.360 00:07:19.360 ' 00:07:19.360 18:05:29 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:19.360 18:05:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.360 18:05:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.360 18:05:29 env -- common/autotest_common.sh@10 -- # set +x 00:07:19.360 ************************************ 00:07:19.360 START TEST env_memory 00:07:19.360 ************************************ 00:07:19.360 18:05:29 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:19.360 00:07:19.360 00:07:19.360 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.360 http://cunit.sourceforge.net/ 00:07:19.360 00:07:19.360 00:07:19.360 Suite: memory 00:07:19.360 Test: alloc and free memory map ...[2024-12-06 18:05:29.769653] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:19.360 passed 00:07:19.360 Test: mem map translation ...[2024-12-06 18:05:29.814546] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:19.361 [2024-12-06 18:05:29.814620] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:19.361 [2024-12-06 18:05:29.814691] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:19.361 [2024-12-06 18:05:29.814734] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:19.361 passed 00:07:19.361 Test: mem map registration ...[2024-12-06 18:05:29.882961] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:19.361 [2024-12-06 18:05:29.883031] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:19.361 passed 00:07:19.620 Test: mem map adjacent registrations ...passed 00:07:19.620 00:07:19.620 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.620 suites 1 1 n/a 0 0 00:07:19.620 tests 4 4 4 0 0 00:07:19.620 asserts 152 152 152 0 n/a 00:07:19.620 00:07:19.620 Elapsed time = 0.243 seconds 00:07:19.620 00:07:19.620 real 0m0.293s 00:07:19.620 user 0m0.249s 00:07:19.620 sys 0m0.036s 00:07:19.620 18:05:29 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.620 18:05:29 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:19.620 ************************************ 00:07:19.620 END TEST env_memory 00:07:19.620 ************************************ 00:07:19.620 18:05:30 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:19.620 18:05:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.620 18:05:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.620 18:05:30 env -- common/autotest_common.sh@10 -- # set +x 00:07:19.620 ************************************ 00:07:19.620 START TEST env_vtophys 00:07:19.620 ************************************ 00:07:19.620 18:05:30 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:19.620 EAL: lib.eal log level changed from notice to debug 00:07:19.620 EAL: Detected lcore 0 as core 0 on socket 0 00:07:19.620 EAL: Detected lcore 1 as core 0 on socket 0 00:07:19.620 EAL: Detected lcore 2 as core 0 on socket 0 00:07:19.620 EAL: Detected lcore 3 as core 0 on socket 0 00:07:19.620 EAL: Detected lcore 4 as core 0 on socket 0 00:07:19.620 EAL: Detected lcore 5 as core 0 on socket 0 00:07:19.620 EAL: Detected lcore 6 as core 0 on socket 0 00:07:19.620 EAL: Detected lcore 7 as core 0 on socket 0 00:07:19.620 EAL: Detected lcore 8 as core 0 on socket 0 00:07:19.620 EAL: Detected lcore 9 as core 0 on socket 0 00:07:19.620 EAL: Maximum logical cores by configuration: 128 00:07:19.620 EAL: Detected CPU lcores: 10 00:07:19.620 EAL: Detected NUMA nodes: 1 00:07:19.620 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:19.620 EAL: Detected shared linkage of DPDK 00:07:19.620 EAL: No shared files mode enabled, IPC will be disabled 00:07:19.620 EAL: Selected IOVA mode 'PA' 00:07:19.620 EAL: Probing VFIO support... 00:07:19.620 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:19.620 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:19.620 EAL: Ask a virtual area of 0x2e000 bytes 00:07:19.620 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:19.620 EAL: Setting up physically contiguous memory... 
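The vtophys test above runs with IOVA mode 'PA', where virtual addresses handed to the device must be resolved to physical addresses. As a minimal sketch of how such a translation can be done on Linux — assuming root privileges and using /proc/self/pagemap; this is illustrative and not the SPDK implementation —

```c
/* Editorial sketch (not SPDK code): translate a virtual address to a
 * physical one via /proc/self/pagemap. Each pagemap entry is 64 bits:
 * bit 63 = page present, bits 0-54 = page frame number (PFN).
 * Without CAP_SYS_ADMIN the kernel reports the PFN as 0. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static uint64_t vtophys(const void *vaddr)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    uint64_t vfn = (uint64_t)vaddr / pagesz;
    int fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0)
        return UINT64_MAX;
    uint64_t entry;
    ssize_t n = pread(fd, &entry, sizeof(entry), vfn * sizeof(entry));
    close(fd);
    if (n != sizeof(entry) || !(entry & (1ULL << 63)))  /* not present */
        return UINT64_MAX;
    uint64_t pfn = entry & ((1ULL << 55) - 1);
    return pfn * pagesz + (uint64_t)vaddr % pagesz;
}

int main(void)
{
    char *p = malloc(4096);
    *(volatile char *)p = 1;   /* fault the page in before translating */
    printf("va=%p pa=0x%llx\n", (void *)p, (unsigned long long)vtophys(p));
    free(p);
    return 0;
}
```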
00:07:19.620 EAL: Setting maximum number of open files to 524288 00:07:19.620 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:19.620 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:19.620 EAL: Ask a virtual area of 0x61000 bytes 00:07:19.620 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:19.620 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:19.620 EAL: Ask a virtual area of 0x400000000 bytes 00:07:19.620 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:19.620 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:19.620 EAL: Ask a virtual area of 0x61000 bytes 00:07:19.620 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:19.620 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:19.620 EAL: Ask a virtual area of 0x400000000 bytes 00:07:19.620 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:19.620 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:19.620 EAL: Ask a virtual area of 0x61000 bytes 00:07:19.620 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:19.620 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:19.620 EAL: Ask a virtual area of 0x400000000 bytes 00:07:19.620 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:19.620 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:19.620 EAL: Ask a virtual area of 0x61000 bytes 00:07:19.620 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:19.620 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:19.620 EAL: Ask a virtual area of 0x400000000 bytes 00:07:19.620 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:19.620 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:19.620 EAL: Hugepages will be freed exactly as allocated. 00:07:19.620 EAL: No shared files mode enabled, IPC is disabled 00:07:19.620 EAL: No shared files mode enabled, IPC is disabled 00:07:19.878 EAL: TSC frequency is ~2490000 KHz 00:07:19.878 EAL: Main lcore 0 is ready (tid=7fe2b54d4a40;cpuset=[0]) 00:07:19.878 EAL: Trying to obtain current memory policy. 00:07:19.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:19.878 EAL: Restoring previous memory policy: 0 00:07:19.878 EAL: request: mp_malloc_sync 00:07:19.878 EAL: No shared files mode enabled, IPC is disabled 00:07:19.878 EAL: Heap on socket 0 was expanded by 2MB 00:07:19.878 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:19.878 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:19.878 EAL: Mem event callback 'spdk:(nil)' registered 00:07:19.878 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:19.878 00:07:19.878 00:07:19.878 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.878 http://cunit.sourceforge.net/ 00:07:19.878 00:07:19.878 00:07:19.878 Suite: components_suite 00:07:20.443 Test: vtophys_malloc_test ...passed 00:07:20.443 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
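The memseg geometry printed above is self-consistent: each list holds n_segs:8192 segments of hugepage_sz:2097152 (2 MiB), so one list needs 8192 * 2 MiB = 16 GiB = 0x400000000 bytes of virtual address space — exactly the size of each "VA reserved for memseg list" line — and the four lists together pre-reserve 64 GiB. A tiny check using only the values from the log:

```c
/* Verifies the per-list VA reservation size seen in the EAL output:
 * 8192 segments * 2 MiB hugepage = 0x400000000 bytes = 16 GiB per list,
 * 4 lists = 64 GiB of reserved virtual address space. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t n_segs = 8192, hugepage_sz = 2097152; /* values from the log */
    uint64_t list_va = n_segs * hugepage_sz;
    printf("per-list VA: 0x%llx (%llu GiB)\n",
           (unsigned long long)list_va, (unsigned long long)(list_va >> 30));
    printf("4 lists:     %llu GiB\n",
           (unsigned long long)((4 * list_va) >> 30));
    return 0;
}
```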
00:07:20.443 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:20.443 EAL: Restoring previous memory policy: 4 00:07:20.443 EAL: Calling mem event callback 'spdk:(nil)' 00:07:20.443 EAL: request: mp_malloc_sync 00:07:20.443 EAL: No shared files mode enabled, IPC is disabled 00:07:20.443 EAL: Heap on socket 0 was expanded by 4MB 00:07:20.443 EAL: Calling mem event callback 'spdk:(nil)' 00:07:20.443 EAL: request: mp_malloc_sync 00:07:20.443 EAL: No shared files mode enabled, IPC is disabled 00:07:20.443 EAL: Heap on socket 0 was shrunk by 4MB 00:07:20.443 EAL: Trying to obtain current memory policy. 00:07:20.443 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:20.443 EAL: Restoring previous memory policy: 4 00:07:20.443 EAL: Calling mem event callback 'spdk:(nil)' 00:07:20.443 EAL: request: mp_malloc_sync 00:07:20.443 EAL: No shared files mode enabled, IPC is disabled 00:07:20.443 EAL: Heap on socket 0 was expanded by 6MB 00:07:20.443 EAL: Calling mem event callback 'spdk:(nil)' 00:07:20.443 EAL: request: mp_malloc_sync 00:07:20.443 EAL: No shared files mode enabled, IPC is disabled 00:07:20.443 EAL: Heap on socket 0 was shrunk by 6MB 00:07:20.443 EAL: Trying to obtain current memory policy. 00:07:20.443 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:20.443 EAL: Restoring previous memory policy: 4 00:07:20.443 EAL: Calling mem event callback 'spdk:(nil)' 00:07:20.443 EAL: request: mp_malloc_sync 00:07:20.443 EAL: No shared files mode enabled, IPC is disabled 00:07:20.443 EAL: Heap on socket 0 was expanded by 10MB 00:07:20.443 EAL: Calling mem event callback 'spdk:(nil)' 00:07:20.443 EAL: request: mp_malloc_sync 00:07:20.443 EAL: No shared files mode enabled, IPC is disabled 00:07:20.443 EAL: Heap on socket 0 was shrunk by 10MB 00:07:20.443 EAL: Trying to obtain current memory policy. 00:07:20.443 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:20.443 EAL: Restoring previous memory policy: 4 00:07:20.443 EAL: Calling mem event callback 'spdk:(nil)' 00:07:20.443 EAL: request: mp_malloc_sync 00:07:20.443 EAL: No shared files mode enabled, IPC is disabled 00:07:20.443 EAL: Heap on socket 0 was expanded by 18MB 00:07:20.443 EAL: Calling mem event callback 'spdk:(nil)' 00:07:20.443 EAL: request: mp_malloc_sync 00:07:20.443 EAL: No shared files mode enabled, IPC is disabled 00:07:20.443 EAL: Heap on socket 0 was shrunk by 18MB 00:07:20.443 EAL: Trying to obtain current memory policy. 00:07:20.443 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:20.443 EAL: Restoring previous memory policy: 4 00:07:20.443 EAL: Calling mem event callback 'spdk:(nil)' 00:07:20.443 EAL: request: mp_malloc_sync 00:07:20.443 EAL: No shared files mode enabled, IPC is disabled 00:07:20.443 EAL: Heap on socket 0 was expanded by 34MB 00:07:20.443 EAL: Calling mem event callback 'spdk:(nil)' 00:07:20.701 EAL: request: mp_malloc_sync 00:07:20.701 EAL: No shared files mode enabled, IPC is disabled 00:07:20.701 EAL: Heap on socket 0 was shrunk by 34MB 00:07:20.701 EAL: Trying to obtain current memory policy. 
00:07:20.701 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:20.701 EAL: Restoring previous memory policy: 4 00:07:20.701 EAL: Calling mem event callback 'spdk:(nil)' 00:07:20.701 EAL: request: mp_malloc_sync 00:07:20.701 EAL: No shared files mode enabled, IPC is disabled 00:07:20.701 EAL: Heap on socket 0 was expanded by 66MB 00:07:20.701 EAL: Calling mem event callback 'spdk:(nil)' 00:07:20.701 EAL: request: mp_malloc_sync 00:07:20.701 EAL: No shared files mode enabled, IPC is disabled 00:07:20.701 EAL: Heap on socket 0 was shrunk by 66MB 00:07:20.960 EAL: Trying to obtain current memory policy. 00:07:20.960 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:20.960 EAL: Restoring previous memory policy: 4 00:07:20.960 EAL: Calling mem event callback 'spdk:(nil)' 00:07:20.960 EAL: request: mp_malloc_sync 00:07:20.960 EAL: No shared files mode enabled, IPC is disabled 00:07:20.960 EAL: Heap on socket 0 was expanded by 130MB 00:07:21.220 EAL: Calling mem event callback 'spdk:(nil)' 00:07:21.220 EAL: request: mp_malloc_sync 00:07:21.220 EAL: No shared files mode enabled, IPC is disabled 00:07:21.220 EAL: Heap on socket 0 was shrunk by 130MB 00:07:21.478 EAL: Trying to obtain current memory policy. 00:07:21.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:21.479 EAL: Restoring previous memory policy: 4 00:07:21.479 EAL: Calling mem event callback 'spdk:(nil)' 00:07:21.479 EAL: request: mp_malloc_sync 00:07:21.479 EAL: No shared files mode enabled, IPC is disabled 00:07:21.479 EAL: Heap on socket 0 was expanded by 258MB 00:07:22.046 EAL: Calling mem event callback 'spdk:(nil)' 00:07:22.046 EAL: request: mp_malloc_sync 00:07:22.046 EAL: No shared files mode enabled, IPC is disabled 00:07:22.046 EAL: Heap on socket 0 was shrunk by 258MB 00:07:22.304 EAL: Trying to obtain current memory policy. 00:07:22.304 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:22.562 EAL: Restoring previous memory policy: 4 00:07:22.562 EAL: Calling mem event callback 'spdk:(nil)' 00:07:22.562 EAL: request: mp_malloc_sync 00:07:22.562 EAL: No shared files mode enabled, IPC is disabled 00:07:22.562 EAL: Heap on socket 0 was expanded by 514MB 00:07:23.498 EAL: Calling mem event callback 'spdk:(nil)' 00:07:23.498 EAL: request: mp_malloc_sync 00:07:23.498 EAL: No shared files mode enabled, IPC is disabled 00:07:23.498 EAL: Heap on socket 0 was shrunk by 514MB 00:07:24.431 EAL: Trying to obtain current memory policy. 
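The expansion sizes in vtophys_malloc_test (4, 6, 10, 18, 34, 66, 130, 258, 514, then 1026 MB below) follow the pattern 2^k + 2 MB, so each step roughly doubles the heap while avoiding exact powers of two. A one-liner that reproduces the sequence:

```c
/* Reproduces the heap expansion sizes seen above: 2^k + 2 MB for
 * k = 1..10, i.e. 4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB. */
#include <stdio.h>

int main(void)
{
    for (int k = 1; k <= 10; k++)
        printf("%d MB\n", (1 << k) + 2);
    return 0;
}
```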
00:07:24.431 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:24.690 EAL: Restoring previous memory policy: 4 00:07:24.691 EAL: Calling mem event callback 'spdk:(nil)' 00:07:24.691 EAL: request: mp_malloc_sync 00:07:24.691 EAL: No shared files mode enabled, IPC is disabled 00:07:24.691 EAL: Heap on socket 0 was expanded by 1026MB 00:07:26.632 EAL: Calling mem event callback 'spdk:(nil)' 00:07:26.632 EAL: request: mp_malloc_sync 00:07:26.633 EAL: No shared files mode enabled, IPC is disabled 00:07:26.633 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:28.588 passed 00:07:28.588 00:07:28.588 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.588 suites 1 1 n/a 0 0 00:07:28.588 tests 2 2 2 0 0 00:07:28.588 asserts 5761 5761 5761 0 n/a 00:07:28.588 00:07:28.588 Elapsed time = 8.392 seconds 00:07:28.588 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.588 EAL: request: mp_malloc_sync 00:07:28.588 EAL: No shared files mode enabled, IPC is disabled 00:07:28.588 EAL: Heap on socket 0 was shrunk by 2MB 00:07:28.588 EAL: No shared files mode enabled, IPC is disabled 00:07:28.588 EAL: No shared files mode enabled, IPC is disabled 00:07:28.588 EAL: No shared files mode enabled, IPC is disabled 00:07:28.588 00:07:28.588 real 0m8.753s 00:07:28.588 user 0m7.646s 00:07:28.588 sys 0m0.936s 00:07:28.588 18:05:38 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.588 ************************************ 00:07:28.588 END TEST env_vtophys 00:07:28.588 ************************************ 00:07:28.588 18:05:38 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:28.588 18:05:38 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:28.588 18:05:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.588 18:05:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.588 18:05:38 env -- common/autotest_common.sh@10 -- # set +x 00:07:28.588 ************************************ 00:07:28.588 START TEST env_pci 00:07:28.588 ************************************ 00:07:28.588 18:05:38 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:28.588 00:07:28.588 00:07:28.588 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.588 http://cunit.sourceforge.net/ 00:07:28.588 00:07:28.588 00:07:28.588 Suite: pci 00:07:28.588 Test: pci_hook ...[2024-12-06 18:05:38.937499] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57665 has claimed it 00:07:28.588 EAL: Cannot find device (10000:00:01.0) 00:07:28.588 passed 00:07:28.588 00:07:28.588 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.588 suites 1 1 n/a 0 0 00:07:28.588 tests 1 1 1 0 0 00:07:28.588 asserts 25 25 25 0 n/a 00:07:28.588 00:07:28.588 Elapsed time = 0.013 seconds 00:07:28.588 EAL: Failed to attach device on primary process 00:07:28.588 00:07:28.588 real 0m0.122s 00:07:28.588 user 0m0.057s 00:07:28.588 sys 0m0.063s 00:07:28.588 18:05:39 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.588 ************************************ 00:07:28.588 END TEST env_pci 00:07:28.588 ************************************ 00:07:28.588 18:05:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:28.588 18:05:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:28.588 18:05:39 env -- env/env.sh@15 -- # uname 00:07:28.588 18:05:39 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:28.588 18:05:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:28.588 18:05:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:28.588 18:05:39 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:28.588 18:05:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.588 18:05:39 env -- common/autotest_common.sh@10 -- # set +x 00:07:28.588 ************************************ 00:07:28.588 START TEST env_dpdk_post_init 00:07:28.588 ************************************ 00:07:28.588 18:05:39 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:28.588 EAL: Detected CPU lcores: 10 00:07:28.588 EAL: Detected NUMA nodes: 1 00:07:28.847 EAL: Detected shared linkage of DPDK 00:07:28.847 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:28.847 EAL: Selected IOVA mode 'PA' 00:07:28.847 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:28.847 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:28.847 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:28.847 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:07:28.847 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:07:28.847 Starting DPDK initialization... 00:07:28.847 Starting SPDK post initialization... 00:07:28.847 SPDK NVMe probe 00:07:28.847 Attaching to 0000:00:10.0 00:07:28.847 Attaching to 0000:00:11.0 00:07:28.847 Attaching to 0000:00:12.0 00:07:28.847 Attaching to 0000:00:13.0 00:07:28.847 Attached to 0000:00:10.0 00:07:28.847 Attached to 0000:00:11.0 00:07:28.847 Attached to 0000:00:13.0 00:07:28.847 Attached to 0000:00:12.0 00:07:28.847 Cleaning up... 
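The "Attaching to .../Attached to ..." lines come from SPDK enumerating the four emulated NVMe controllers (1b36:0010). A condensed sketch of the probe/attach pattern from SPDK's public API (spdk/nvme.h) follows; environment-option and initialization details vary between SPDK versions, so treat this as an assumption-laden outline rather than the test's actual code:

```c
/* Editorial sketch of SPDK NVMe enumeration, modeled on the public
 * spdk_nvme_probe() API. A NULL transport ID means "scan the local
 * PCIe bus", which is what produces the attach lines in the log. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                     struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attaching to %s\n", trid->traddr);
    return true;                 /* true = attach to this controller */
}

static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                      struct spdk_nvme_ctrlr *ctrlr,
                      const struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attached to %s\n", trid->traddr);
}

int main(void)
{
    struct spdk_env_opts opts;
    spdk_env_opts_init(&opts);
    opts.name = "probe_sketch";  /* hypothetical app name */
    if (spdk_env_init(&opts) < 0)
        return 1;
    if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0)
        return 1;
    return 0;
}
```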
00:07:28.847 00:07:28.847 real 0m0.315s 00:07:28.847 user 0m0.094s 00:07:28.847 sys 0m0.119s 00:07:28.847 18:05:39 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.847 18:05:39 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:28.847 ************************************ 00:07:28.847 END TEST env_dpdk_post_init 00:07:28.847 ************************************ 00:07:29.107 18:05:39 env -- env/env.sh@26 -- # uname 00:07:29.107 18:05:39 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:29.107 18:05:39 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:29.107 18:05:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.107 18:05:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.107 18:05:39 env -- common/autotest_common.sh@10 -- # set +x 00:07:29.107 ************************************ 00:07:29.107 START TEST env_mem_callbacks 00:07:29.107 ************************************ 00:07:29.107 18:05:39 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:29.107 EAL: Detected CPU lcores: 10 00:07:29.107 EAL: Detected NUMA nodes: 1 00:07:29.107 EAL: Detected shared linkage of DPDK 00:07:29.107 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:29.107 EAL: Selected IOVA mode 'PA' 00:07:29.107 00:07:29.107 00:07:29.107 CUnit - A unit testing framework for C - Version 2.1-3 00:07:29.107 http://cunit.sourceforge.net/ 00:07:29.107 00:07:29.107 00:07:29.107 Suite: memory 00:07:29.107 Test: test ... 00:07:29.107 register 0x200000200000 2097152 00:07:29.107 malloc 3145728 00:07:29.107 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:29.107 register 0x200000400000 4194304 00:07:29.367 buf 0x2000004fffc0 len 3145728 PASSED 00:07:29.367 malloc 64 00:07:29.367 buf 0x2000004ffec0 len 64 PASSED 00:07:29.367 malloc 4194304 00:07:29.367 register 0x200000800000 6291456 00:07:29.367 buf 0x2000009fffc0 len 4194304 PASSED 00:07:29.367 free 0x2000004fffc0 3145728 00:07:29.367 free 0x2000004ffec0 64 00:07:29.367 unregister 0x200000400000 4194304 PASSED 00:07:29.367 free 0x2000009fffc0 4194304 00:07:29.367 unregister 0x200000800000 6291456 PASSED 00:07:29.367 malloc 8388608 00:07:29.367 register 0x200000400000 10485760 00:07:29.367 buf 0x2000005fffc0 len 8388608 PASSED 00:07:29.367 free 0x2000005fffc0 8388608 00:07:29.367 unregister 0x200000400000 10485760 PASSED 00:07:29.367 passed 00:07:29.367 00:07:29.367 Run Summary: Type Total Ran Passed Failed Inactive 00:07:29.367 suites 1 1 n/a 0 0 00:07:29.367 tests 1 1 1 0 0 00:07:29.367 asserts 15 15 15 0 n/a 00:07:29.367 00:07:29.367 Elapsed time = 0.081 seconds 00:07:29.367 00:07:29.367 real 0m0.294s 00:07:29.367 user 0m0.114s 00:07:29.367 sys 0m0.077s 00:07:29.367 18:05:39 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.367 ************************************ 00:07:29.367 END TEST env_mem_callbacks 00:07:29.367 ************************************ 00:07:29.367 18:05:39 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:29.367 ************************************ 00:07:29.367 END TEST env 00:07:29.367 ************************************ 00:07:29.367 00:07:29.367 real 0m10.351s 00:07:29.367 user 0m8.380s 00:07:29.367 sys 0m1.590s 00:07:29.367 18:05:39 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.367 18:05:39 env -- 
common/autotest_common.sh@10 -- # set +x 00:07:29.367 18:05:39 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:29.367 18:05:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.367 18:05:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.367 18:05:39 -- common/autotest_common.sh@10 -- # set +x 00:07:29.367 ************************************ 00:07:29.367 START TEST rpc 00:07:29.367 ************************************ 00:07:29.367 18:05:39 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:29.626 * Looking for test storage... 00:07:29.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:29.626 18:05:40 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:29.626 18:05:40 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:29.626 18:05:40 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:29.626 18:05:40 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:29.626 18:05:40 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.626 18:05:40 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.626 18:05:40 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.626 18:05:40 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.626 18:05:40 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.626 18:05:40 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.626 18:05:40 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.626 18:05:40 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.626 18:05:40 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.626 18:05:40 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.626 18:05:40 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.626 18:05:40 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:29.626 18:05:40 rpc -- scripts/common.sh@345 -- # : 1 00:07:29.626 18:05:40 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.626 18:05:40 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.627 18:05:40 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:29.627 18:05:40 rpc -- scripts/common.sh@353 -- # local d=1 00:07:29.627 18:05:40 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.627 18:05:40 rpc -- scripts/common.sh@355 -- # echo 1 00:07:29.627 18:05:40 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.627 18:05:40 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:29.627 18:05:40 rpc -- scripts/common.sh@353 -- # local d=2 00:07:29.627 18:05:40 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.627 18:05:40 rpc -- scripts/common.sh@355 -- # echo 2 00:07:29.627 18:05:40 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.627 18:05:40 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.627 18:05:40 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.627 18:05:40 rpc -- scripts/common.sh@368 -- # return 0 00:07:29.627 18:05:40 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.627 18:05:40 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:29.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.627 --rc genhtml_branch_coverage=1 00:07:29.627 --rc genhtml_function_coverage=1 00:07:29.627 --rc genhtml_legend=1 00:07:29.627 --rc geninfo_all_blocks=1 00:07:29.627 --rc geninfo_unexecuted_blocks=1 00:07:29.627 00:07:29.627 ' 00:07:29.627 18:05:40 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:29.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.627 --rc genhtml_branch_coverage=1 00:07:29.627 --rc genhtml_function_coverage=1 00:07:29.627 --rc genhtml_legend=1 00:07:29.627 --rc geninfo_all_blocks=1 00:07:29.627 --rc geninfo_unexecuted_blocks=1 00:07:29.627 00:07:29.627 ' 00:07:29.627 18:05:40 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:29.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.627 --rc genhtml_branch_coverage=1 00:07:29.627 --rc genhtml_function_coverage=1 00:07:29.627 --rc genhtml_legend=1 00:07:29.627 --rc geninfo_all_blocks=1 00:07:29.627 --rc geninfo_unexecuted_blocks=1 00:07:29.627 00:07:29.627 ' 00:07:29.627 18:05:40 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:29.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.627 --rc genhtml_branch_coverage=1 00:07:29.627 --rc genhtml_function_coverage=1 00:07:29.627 --rc genhtml_legend=1 00:07:29.627 --rc geninfo_all_blocks=1 00:07:29.627 --rc geninfo_unexecuted_blocks=1 00:07:29.627 00:07:29.627 ' 00:07:29.627 18:05:40 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:29.627 18:05:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57798 00:07:29.627 18:05:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:29.627 18:05:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57798 00:07:29.627 18:05:40 rpc -- common/autotest_common.sh@835 -- # '[' -z 57798 ']' 00:07:29.627 18:05:40 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.627 18:05:40 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.627 18:05:40 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
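Once spdk_tgt is listening, every rpc_cmd below is a JSON-RPC 2.0 call over the UNIX domain socket /var/tmp/spdk.sock shown above. A bare-bones standalone client that issues the same bdev_get_bdevs method the integrity tests exercise — a minimal sketch with no response framing or error recovery, not the rpc.py tooling the harness actually uses:

```c
/* Editorial sketch: send one JSON-RPC 2.0 request to the spdk_tgt
 * socket and print whatever the target sends back. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        perror("connect");
        return 1;
    }

    const char *req =
        "{\"jsonrpc\":\"2.0\",\"method\":\"bdev_get_bdevs\",\"id\":1}";
    if (write(fd, req, strlen(req)) < 0) { perror("write"); return 1; }

    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s\n", buf);
    }
    close(fd);
    return 0;
}
```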
00:07:29.627 18:05:40 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.627 18:05:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.885 [2024-12-06 18:05:40.260588] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:07:29.885 [2024-12-06 18:05:40.261356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57798 ] 00:07:30.144 [2024-12-06 18:05:40.461087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.144 [2024-12-06 18:05:40.578351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:30.144 [2024-12-06 18:05:40.578628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57798' to capture a snapshot of events at runtime. 00:07:30.144 [2024-12-06 18:05:40.578734] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.144 [2024-12-06 18:05:40.578790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.144 [2024-12-06 18:05:40.578820] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57798 for offline analysis/debug. 00:07:30.144 [2024-12-06 18:05:40.580307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.080 18:05:41 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.080 18:05:41 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:31.080 18:05:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:31.080 18:05:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:31.080 18:05:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:31.080 18:05:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:31.080 18:05:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.080 18:05:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.080 18:05:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.080 ************************************ 00:07:31.080 START TEST rpc_integrity 00:07:31.080 ************************************ 00:07:31.080 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:31.080 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:31.080 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.080 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:31.080 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.080 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:31.080 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:31.080 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:31.080 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:31.080 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.080 18:05:41 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:31.080 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.080 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:31.080 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:31.080 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.080 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:31.080 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.080 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:31.080 { 00:07:31.080 "name": "Malloc0", 00:07:31.080 "aliases": [ 00:07:31.080 "ea8ec7f8-b7b0-4b9f-ac3c-51e8a2cdcca1" 00:07:31.080 ], 00:07:31.080 "product_name": "Malloc disk", 00:07:31.080 "block_size": 512, 00:07:31.080 "num_blocks": 16384, 00:07:31.080 "uuid": "ea8ec7f8-b7b0-4b9f-ac3c-51e8a2cdcca1", 00:07:31.080 "assigned_rate_limits": { 00:07:31.080 "rw_ios_per_sec": 0, 00:07:31.080 "rw_mbytes_per_sec": 0, 00:07:31.080 "r_mbytes_per_sec": 0, 00:07:31.080 "w_mbytes_per_sec": 0 00:07:31.080 }, 00:07:31.080 "claimed": false, 00:07:31.080 "zoned": false, 00:07:31.080 "supported_io_types": { 00:07:31.080 "read": true, 00:07:31.080 "write": true, 00:07:31.080 "unmap": true, 00:07:31.080 "flush": true, 00:07:31.080 "reset": true, 00:07:31.080 "nvme_admin": false, 00:07:31.080 "nvme_io": false, 00:07:31.080 "nvme_io_md": false, 00:07:31.080 "write_zeroes": true, 00:07:31.080 "zcopy": true, 00:07:31.080 "get_zone_info": false, 00:07:31.080 "zone_management": false, 00:07:31.080 "zone_append": false, 00:07:31.080 "compare": false, 00:07:31.080 "compare_and_write": false, 00:07:31.080 "abort": true, 00:07:31.080 "seek_hole": false, 00:07:31.080 "seek_data": false, 00:07:31.080 "copy": true, 00:07:31.080 "nvme_iov_md": false 00:07:31.080 }, 00:07:31.080 "memory_domains": [ 00:07:31.080 { 00:07:31.080 "dma_device_id": "system", 00:07:31.080 "dma_device_type": 1 00:07:31.080 }, 00:07:31.080 { 00:07:31.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.080 "dma_device_type": 2 00:07:31.080 } 00:07:31.080 ], 00:07:31.080 "driver_specific": {} 00:07:31.080 } 00:07:31.080 ]' 00:07:31.080 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:31.339 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:31.339 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:31.339 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.339 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:31.339 [2024-12-06 18:05:41.669428] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:31.339 [2024-12-06 18:05:41.669537] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.339 [2024-12-06 18:05:41.669582] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:31.339 [2024-12-06 18:05:41.669606] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.339 [2024-12-06 18:05:41.673246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.339 [2024-12-06 18:05:41.673347] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:31.340 Passthru0 00:07:31.340 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.340 
18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:31.340 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.340 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:31.340 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.340 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:31.340 { 00:07:31.340 "name": "Malloc0", 00:07:31.340 "aliases": [ 00:07:31.340 "ea8ec7f8-b7b0-4b9f-ac3c-51e8a2cdcca1" 00:07:31.340 ], 00:07:31.340 "product_name": "Malloc disk", 00:07:31.340 "block_size": 512, 00:07:31.340 "num_blocks": 16384, 00:07:31.340 "uuid": "ea8ec7f8-b7b0-4b9f-ac3c-51e8a2cdcca1", 00:07:31.340 "assigned_rate_limits": { 00:07:31.340 "rw_ios_per_sec": 0, 00:07:31.340 "rw_mbytes_per_sec": 0, 00:07:31.340 "r_mbytes_per_sec": 0, 00:07:31.340 "w_mbytes_per_sec": 0 00:07:31.340 }, 00:07:31.340 "claimed": true, 00:07:31.340 "claim_type": "exclusive_write", 00:07:31.340 "zoned": false, 00:07:31.340 "supported_io_types": { 00:07:31.340 "read": true, 00:07:31.340 "write": true, 00:07:31.340 "unmap": true, 00:07:31.340 "flush": true, 00:07:31.340 "reset": true, 00:07:31.340 "nvme_admin": false, 00:07:31.340 "nvme_io": false, 00:07:31.340 "nvme_io_md": false, 00:07:31.340 "write_zeroes": true, 00:07:31.340 "zcopy": true, 00:07:31.340 "get_zone_info": false, 00:07:31.340 "zone_management": false, 00:07:31.340 "zone_append": false, 00:07:31.340 "compare": false, 00:07:31.340 "compare_and_write": false, 00:07:31.340 "abort": true, 00:07:31.340 "seek_hole": false, 00:07:31.340 "seek_data": false, 00:07:31.340 "copy": true, 00:07:31.340 "nvme_iov_md": false 00:07:31.340 }, 00:07:31.340 "memory_domains": [ 00:07:31.340 { 00:07:31.340 "dma_device_id": "system", 00:07:31.340 "dma_device_type": 1 00:07:31.340 }, 00:07:31.340 { 00:07:31.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.340 "dma_device_type": 2 00:07:31.340 } 00:07:31.340 ], 00:07:31.340 "driver_specific": {} 00:07:31.340 }, 00:07:31.340 { 00:07:31.340 "name": "Passthru0", 00:07:31.340 "aliases": [ 00:07:31.340 "80460568-cc4e-539f-a78c-f08e41c249c1" 00:07:31.340 ], 00:07:31.340 "product_name": "passthru", 00:07:31.340 "block_size": 512, 00:07:31.340 "num_blocks": 16384, 00:07:31.340 "uuid": "80460568-cc4e-539f-a78c-f08e41c249c1", 00:07:31.340 "assigned_rate_limits": { 00:07:31.340 "rw_ios_per_sec": 0, 00:07:31.340 "rw_mbytes_per_sec": 0, 00:07:31.340 "r_mbytes_per_sec": 0, 00:07:31.340 "w_mbytes_per_sec": 0 00:07:31.340 }, 00:07:31.340 "claimed": false, 00:07:31.340 "zoned": false, 00:07:31.340 "supported_io_types": { 00:07:31.340 "read": true, 00:07:31.340 "write": true, 00:07:31.340 "unmap": true, 00:07:31.340 "flush": true, 00:07:31.340 "reset": true, 00:07:31.340 "nvme_admin": false, 00:07:31.340 "nvme_io": false, 00:07:31.340 "nvme_io_md": false, 00:07:31.340 "write_zeroes": true, 00:07:31.340 "zcopy": true, 00:07:31.340 "get_zone_info": false, 00:07:31.340 "zone_management": false, 00:07:31.340 "zone_append": false, 00:07:31.340 "compare": false, 00:07:31.340 "compare_and_write": false, 00:07:31.340 "abort": true, 00:07:31.340 "seek_hole": false, 00:07:31.340 "seek_data": false, 00:07:31.340 "copy": true, 00:07:31.340 "nvme_iov_md": false 00:07:31.340 }, 00:07:31.340 "memory_domains": [ 00:07:31.340 { 00:07:31.340 "dma_device_id": "system", 00:07:31.340 "dma_device_type": 1 00:07:31.340 }, 00:07:31.340 { 00:07:31.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.340 "dma_device_type": 2 
00:07:31.340 } 00:07:31.340 ], 00:07:31.340 "driver_specific": { 00:07:31.340 "passthru": { 00:07:31.340 "name": "Passthru0", 00:07:31.340 "base_bdev_name": "Malloc0" 00:07:31.340 } 00:07:31.340 } 00:07:31.340 } 00:07:31.340 ]' 00:07:31.340 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:31.340 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:31.340 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:31.340 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.340 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:31.340 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.340 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:31.340 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.340 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:31.340 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.340 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:31.340 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.340 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:31.340 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.340 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:31.340 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:31.340 ************************************ 00:07:31.340 END TEST rpc_integrity 00:07:31.340 ************************************ 00:07:31.340 18:05:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:31.340 00:07:31.340 real 0m0.362s 00:07:31.340 user 0m0.194s 00:07:31.340 sys 0m0.061s 00:07:31.340 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.340 18:05:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:31.599 18:05:41 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:31.599 18:05:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.599 18:05:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.599 18:05:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.599 ************************************ 00:07:31.599 START TEST rpc_plugins 00:07:31.599 ************************************ 00:07:31.599 18:05:41 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:31.599 18:05:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:31.599 18:05:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.599 18:05:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:31.599 18:05:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.599 18:05:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:31.599 18:05:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:31.599 18:05:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.599 18:05:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:31.599 18:05:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.599 18:05:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:31.599 { 00:07:31.599 "name": "Malloc1", 00:07:31.599 "aliases": 
[ 00:07:31.599 "111560ff-982e-419e-ab77-f10759cbd6e8" 00:07:31.599 ], 00:07:31.599 "product_name": "Malloc disk", 00:07:31.599 "block_size": 4096, 00:07:31.599 "num_blocks": 256, 00:07:31.599 "uuid": "111560ff-982e-419e-ab77-f10759cbd6e8", 00:07:31.599 "assigned_rate_limits": { 00:07:31.599 "rw_ios_per_sec": 0, 00:07:31.599 "rw_mbytes_per_sec": 0, 00:07:31.599 "r_mbytes_per_sec": 0, 00:07:31.599 "w_mbytes_per_sec": 0 00:07:31.599 }, 00:07:31.599 "claimed": false, 00:07:31.599 "zoned": false, 00:07:31.599 "supported_io_types": { 00:07:31.599 "read": true, 00:07:31.599 "write": true, 00:07:31.599 "unmap": true, 00:07:31.599 "flush": true, 00:07:31.599 "reset": true, 00:07:31.599 "nvme_admin": false, 00:07:31.599 "nvme_io": false, 00:07:31.599 "nvme_io_md": false, 00:07:31.599 "write_zeroes": true, 00:07:31.599 "zcopy": true, 00:07:31.599 "get_zone_info": false, 00:07:31.599 "zone_management": false, 00:07:31.599 "zone_append": false, 00:07:31.599 "compare": false, 00:07:31.599 "compare_and_write": false, 00:07:31.599 "abort": true, 00:07:31.599 "seek_hole": false, 00:07:31.599 "seek_data": false, 00:07:31.599 "copy": true, 00:07:31.599 "nvme_iov_md": false 00:07:31.599 }, 00:07:31.599 "memory_domains": [ 00:07:31.599 { 00:07:31.599 "dma_device_id": "system", 00:07:31.599 "dma_device_type": 1 00:07:31.599 }, 00:07:31.599 { 00:07:31.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.599 "dma_device_type": 2 00:07:31.599 } 00:07:31.599 ], 00:07:31.599 "driver_specific": {} 00:07:31.599 } 00:07:31.599 ]' 00:07:31.599 18:05:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:31.600 18:05:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:31.600 18:05:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:31.600 18:05:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.600 18:05:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:31.600 18:05:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.600 18:05:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:31.600 18:05:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.600 18:05:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:31.600 18:05:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.600 18:05:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:31.600 18:05:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:31.600 ************************************ 00:07:31.600 END TEST rpc_plugins 00:07:31.600 ************************************ 00:07:31.600 18:05:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:31.600 00:07:31.600 real 0m0.177s 00:07:31.600 user 0m0.099s 00:07:31.600 sys 0m0.029s 00:07:31.600 18:05:42 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.600 18:05:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:31.600 18:05:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:31.600 18:05:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.600 18:05:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.600 18:05:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.600 ************************************ 00:07:31.600 START TEST rpc_trace_cmd_test 00:07:31.600 ************************************ 00:07:31.600 18:05:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:07:31.859 18:05:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:31.859 18:05:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:31.859 18:05:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.859 18:05:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.859 18:05:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.859 18:05:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:31.859 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57798", 00:07:31.859 "tpoint_group_mask": "0x8", 00:07:31.859 "iscsi_conn": { 00:07:31.859 "mask": "0x2", 00:07:31.859 "tpoint_mask": "0x0" 00:07:31.859 }, 00:07:31.859 "scsi": { 00:07:31.859 "mask": "0x4", 00:07:31.859 "tpoint_mask": "0x0" 00:07:31.859 }, 00:07:31.859 "bdev": { 00:07:31.859 "mask": "0x8", 00:07:31.859 "tpoint_mask": "0xffffffffffffffff" 00:07:31.859 }, 00:07:31.859 "nvmf_rdma": { 00:07:31.859 "mask": "0x10", 00:07:31.859 "tpoint_mask": "0x0" 00:07:31.859 }, 00:07:31.859 "nvmf_tcp": { 00:07:31.859 "mask": "0x20", 00:07:31.859 "tpoint_mask": "0x0" 00:07:31.859 }, 00:07:31.859 "ftl": { 00:07:31.859 "mask": "0x40", 00:07:31.859 "tpoint_mask": "0x0" 00:07:31.859 }, 00:07:31.859 "blobfs": { 00:07:31.859 "mask": "0x80", 00:07:31.859 "tpoint_mask": "0x0" 00:07:31.859 }, 00:07:31.859 "dsa": { 00:07:31.859 "mask": "0x200", 00:07:31.859 "tpoint_mask": "0x0" 00:07:31.859 }, 00:07:31.859 "thread": { 00:07:31.859 "mask": "0x400", 00:07:31.859 "tpoint_mask": "0x0" 00:07:31.859 }, 00:07:31.859 "nvme_pcie": { 00:07:31.859 "mask": "0x800", 00:07:31.859 "tpoint_mask": "0x0" 00:07:31.859 }, 00:07:31.859 "iaa": { 00:07:31.859 "mask": "0x1000", 00:07:31.859 "tpoint_mask": "0x0" 00:07:31.859 }, 00:07:31.859 "nvme_tcp": { 00:07:31.859 "mask": "0x2000", 00:07:31.859 "tpoint_mask": "0x0" 00:07:31.859 }, 00:07:31.859 "bdev_nvme": { 00:07:31.859 "mask": "0x4000", 00:07:31.859 "tpoint_mask": "0x0" 00:07:31.859 }, 00:07:31.859 "sock": { 00:07:31.859 "mask": "0x8000", 00:07:31.859 "tpoint_mask": "0x0" 00:07:31.859 }, 00:07:31.859 "blob": { 00:07:31.859 "mask": "0x10000", 00:07:31.859 "tpoint_mask": "0x0" 00:07:31.859 }, 00:07:31.859 "bdev_raid": { 00:07:31.859 "mask": "0x20000", 00:07:31.859 "tpoint_mask": "0x0" 00:07:31.859 }, 00:07:31.859 "scheduler": { 00:07:31.859 "mask": "0x40000", 00:07:31.859 "tpoint_mask": "0x0" 00:07:31.859 } 00:07:31.859 }' 00:07:31.859 18:05:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:31.859 18:05:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:31.859 18:05:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:31.859 18:05:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:31.859 18:05:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:31.859 18:05:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:31.859 18:05:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:31.859 18:05:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:31.859 18:05:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:31.859 ************************************ 00:07:31.859 END TEST rpc_trace_cmd_test 00:07:31.859 ************************************ 00:07:31.859 18:05:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:31.859 00:07:31.859 real 0m0.244s 
00:07:31.859 user 0m0.188s 00:07:31.859 sys 0m0.045s 00:07:31.859 18:05:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.859 18:05:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.131 18:05:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:32.131 18:05:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:32.131 18:05:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:32.131 18:05:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.131 18:05:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.131 18:05:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.131 ************************************ 00:07:32.131 START TEST rpc_daemon_integrity 00:07:32.131 ************************************ 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:32.131 { 00:07:32.131 "name": "Malloc2", 00:07:32.131 "aliases": [ 00:07:32.131 "a98fc62a-7dd3-4ae6-87b9-6ff915e009e1" 00:07:32.131 ], 00:07:32.131 "product_name": "Malloc disk", 00:07:32.131 "block_size": 512, 00:07:32.131 "num_blocks": 16384, 00:07:32.131 "uuid": "a98fc62a-7dd3-4ae6-87b9-6ff915e009e1", 00:07:32.131 "assigned_rate_limits": { 00:07:32.131 "rw_ios_per_sec": 0, 00:07:32.131 "rw_mbytes_per_sec": 0, 00:07:32.131 "r_mbytes_per_sec": 0, 00:07:32.131 "w_mbytes_per_sec": 0 00:07:32.131 }, 00:07:32.131 "claimed": false, 00:07:32.131 "zoned": false, 00:07:32.131 "supported_io_types": { 00:07:32.131 "read": true, 00:07:32.131 "write": true, 00:07:32.131 "unmap": true, 00:07:32.131 "flush": true, 00:07:32.131 "reset": true, 00:07:32.131 "nvme_admin": false, 00:07:32.131 "nvme_io": false, 00:07:32.131 "nvme_io_md": false, 00:07:32.131 "write_zeroes": true, 00:07:32.131 "zcopy": true, 00:07:32.131 "get_zone_info": false, 00:07:32.131 "zone_management": false, 00:07:32.131 "zone_append": false, 00:07:32.131 "compare": false, 00:07:32.131 
"compare_and_write": false, 00:07:32.131 "abort": true, 00:07:32.131 "seek_hole": false, 00:07:32.131 "seek_data": false, 00:07:32.131 "copy": true, 00:07:32.131 "nvme_iov_md": false 00:07:32.131 }, 00:07:32.131 "memory_domains": [ 00:07:32.131 { 00:07:32.131 "dma_device_id": "system", 00:07:32.131 "dma_device_type": 1 00:07:32.131 }, 00:07:32.131 { 00:07:32.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.131 "dma_device_type": 2 00:07:32.131 } 00:07:32.131 ], 00:07:32.131 "driver_specific": {} 00:07:32.131 } 00:07:32.131 ]' 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:32.131 [2024-12-06 18:05:42.651590] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:32.131 [2024-12-06 18:05:42.651667] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.131 [2024-12-06 18:05:42.651694] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:32.131 [2024-12-06 18:05:42.651709] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.131 [2024-12-06 18:05:42.654257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.131 [2024-12-06 18:05:42.654315] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:32.131 Passthru0 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.131 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:32.131 { 00:07:32.131 "name": "Malloc2", 00:07:32.131 "aliases": [ 00:07:32.131 "a98fc62a-7dd3-4ae6-87b9-6ff915e009e1" 00:07:32.131 ], 00:07:32.131 "product_name": "Malloc disk", 00:07:32.131 "block_size": 512, 00:07:32.131 "num_blocks": 16384, 00:07:32.131 "uuid": "a98fc62a-7dd3-4ae6-87b9-6ff915e009e1", 00:07:32.131 "assigned_rate_limits": { 00:07:32.131 "rw_ios_per_sec": 0, 00:07:32.131 "rw_mbytes_per_sec": 0, 00:07:32.131 "r_mbytes_per_sec": 0, 00:07:32.131 "w_mbytes_per_sec": 0 00:07:32.131 }, 00:07:32.131 "claimed": true, 00:07:32.131 "claim_type": "exclusive_write", 00:07:32.131 "zoned": false, 00:07:32.131 "supported_io_types": { 00:07:32.131 "read": true, 00:07:32.131 "write": true, 00:07:32.131 "unmap": true, 00:07:32.131 "flush": true, 00:07:32.131 "reset": true, 00:07:32.131 "nvme_admin": false, 00:07:32.131 "nvme_io": false, 00:07:32.131 "nvme_io_md": false, 00:07:32.131 "write_zeroes": true, 00:07:32.131 "zcopy": true, 00:07:32.131 "get_zone_info": false, 00:07:32.131 "zone_management": false, 00:07:32.131 "zone_append": false, 00:07:32.131 "compare": false, 00:07:32.131 "compare_and_write": false, 00:07:32.131 "abort": true, 00:07:32.131 "seek_hole": false, 00:07:32.131 "seek_data": false, 
00:07:32.131 "copy": true, 00:07:32.131 "nvme_iov_md": false 00:07:32.131 }, 00:07:32.131 "memory_domains": [ 00:07:32.131 { 00:07:32.131 "dma_device_id": "system", 00:07:32.131 "dma_device_type": 1 00:07:32.131 }, 00:07:32.131 { 00:07:32.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.131 "dma_device_type": 2 00:07:32.131 } 00:07:32.131 ], 00:07:32.131 "driver_specific": {} 00:07:32.131 }, 00:07:32.131 { 00:07:32.131 "name": "Passthru0", 00:07:32.131 "aliases": [ 00:07:32.131 "d2b6a316-1ebc-55a2-b4a4-cb65fac4fb9e" 00:07:32.131 ], 00:07:32.131 "product_name": "passthru", 00:07:32.131 "block_size": 512, 00:07:32.131 "num_blocks": 16384, 00:07:32.131 "uuid": "d2b6a316-1ebc-55a2-b4a4-cb65fac4fb9e", 00:07:32.131 "assigned_rate_limits": { 00:07:32.131 "rw_ios_per_sec": 0, 00:07:32.132 "rw_mbytes_per_sec": 0, 00:07:32.132 "r_mbytes_per_sec": 0, 00:07:32.132 "w_mbytes_per_sec": 0 00:07:32.132 }, 00:07:32.132 "claimed": false, 00:07:32.132 "zoned": false, 00:07:32.132 "supported_io_types": { 00:07:32.132 "read": true, 00:07:32.132 "write": true, 00:07:32.132 "unmap": true, 00:07:32.132 "flush": true, 00:07:32.132 "reset": true, 00:07:32.132 "nvme_admin": false, 00:07:32.132 "nvme_io": false, 00:07:32.132 "nvme_io_md": false, 00:07:32.132 "write_zeroes": true, 00:07:32.132 "zcopy": true, 00:07:32.132 "get_zone_info": false, 00:07:32.132 "zone_management": false, 00:07:32.132 "zone_append": false, 00:07:32.132 "compare": false, 00:07:32.132 "compare_and_write": false, 00:07:32.132 "abort": true, 00:07:32.132 "seek_hole": false, 00:07:32.132 "seek_data": false, 00:07:32.132 "copy": true, 00:07:32.132 "nvme_iov_md": false 00:07:32.132 }, 00:07:32.132 "memory_domains": [ 00:07:32.132 { 00:07:32.132 "dma_device_id": "system", 00:07:32.132 "dma_device_type": 1 00:07:32.132 }, 00:07:32.132 { 00:07:32.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.132 "dma_device_type": 2 00:07:32.132 } 00:07:32.132 ], 00:07:32.132 "driver_specific": { 00:07:32.132 "passthru": { 00:07:32.132 "name": "Passthru0", 00:07:32.132 "base_bdev_name": "Malloc2" 00:07:32.132 } 00:07:32.132 } 00:07:32.132 } 00:07:32.132 ]' 00:07:32.406 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:32.406 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:32.406 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:32.406 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.406 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:32.406 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.406 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:32.406 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.406 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:32.406 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.406 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:32.406 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.406 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:32.406 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.406 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:07:32.407 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:32.407 ************************************ 00:07:32.407 END TEST rpc_daemon_integrity 00:07:32.407 ************************************ 00:07:32.407 18:05:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:32.407 00:07:32.407 real 0m0.343s 00:07:32.407 user 0m0.180s 00:07:32.407 sys 0m0.064s 00:07:32.407 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.407 18:05:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:32.407 18:05:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:32.407 18:05:42 rpc -- rpc/rpc.sh@84 -- # killprocess 57798 00:07:32.407 18:05:42 rpc -- common/autotest_common.sh@954 -- # '[' -z 57798 ']' 00:07:32.407 18:05:42 rpc -- common/autotest_common.sh@958 -- # kill -0 57798 00:07:32.407 18:05:42 rpc -- common/autotest_common.sh@959 -- # uname 00:07:32.407 18:05:42 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.407 18:05:42 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57798 00:07:32.407 18:05:42 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.407 killing process with pid 57798 00:07:32.407 18:05:42 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.407 18:05:42 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57798' 00:07:32.407 18:05:42 rpc -- common/autotest_common.sh@973 -- # kill 57798 00:07:32.407 18:05:42 rpc -- common/autotest_common.sh@978 -- # wait 57798 00:07:34.930 00:07:34.930 real 0m5.452s 00:07:34.930 user 0m5.931s 00:07:34.930 sys 0m1.061s 00:07:34.930 18:05:45 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.930 18:05:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.930 ************************************ 00:07:34.930 END TEST rpc 00:07:34.930 ************************************ 00:07:34.930 18:05:45 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:34.930 18:05:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.930 18:05:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.930 18:05:45 -- common/autotest_common.sh@10 -- # set +x 00:07:34.930 ************************************ 00:07:34.930 START TEST skip_rpc 00:07:34.930 ************************************ 00:07:34.930 18:05:45 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:35.188 * Looking for test storage... 
00:07:35.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:35.188 18:05:45 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:35.188 18:05:45 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:35.188 18:05:45 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:35.188 18:05:45 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.188 18:05:45 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:35.189 18:05:45 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.189 18:05:45 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:35.189 18:05:45 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:35.189 18:05:45 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.189 18:05:45 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:35.189 18:05:45 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.189 18:05:45 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.189 18:05:45 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.189 18:05:45 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:35.189 18:05:45 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.189 18:05:45 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:35.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.189 --rc genhtml_branch_coverage=1 00:07:35.189 --rc genhtml_function_coverage=1 00:07:35.189 --rc genhtml_legend=1 00:07:35.189 --rc geninfo_all_blocks=1 00:07:35.189 --rc geninfo_unexecuted_blocks=1 00:07:35.189 00:07:35.189 ' 00:07:35.189 18:05:45 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:35.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.189 --rc genhtml_branch_coverage=1 00:07:35.189 --rc genhtml_function_coverage=1 00:07:35.189 --rc genhtml_legend=1 00:07:35.189 --rc geninfo_all_blocks=1 00:07:35.189 --rc geninfo_unexecuted_blocks=1 00:07:35.189 00:07:35.189 ' 00:07:35.189 18:05:45 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:35.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.189 --rc genhtml_branch_coverage=1 00:07:35.189 --rc genhtml_function_coverage=1 00:07:35.189 --rc genhtml_legend=1 00:07:35.189 --rc geninfo_all_blocks=1 00:07:35.189 --rc geninfo_unexecuted_blocks=1 00:07:35.189 00:07:35.189 ' 00:07:35.189 18:05:45 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:35.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.189 --rc genhtml_branch_coverage=1 00:07:35.189 --rc genhtml_function_coverage=1 00:07:35.189 --rc genhtml_legend=1 00:07:35.189 --rc geninfo_all_blocks=1 00:07:35.189 --rc geninfo_unexecuted_blocks=1 00:07:35.189 00:07:35.189 ' 00:07:35.189 18:05:45 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:35.189 18:05:45 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:35.189 18:05:45 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:35.189 18:05:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.189 18:05:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.189 18:05:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.189 ************************************ 00:07:35.189 START TEST skip_rpc 00:07:35.189 ************************************ 00:07:35.189 18:05:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:35.189 18:05:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:35.189 18:05:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58027 00:07:35.189 18:05:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:35.189 18:05:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:35.446 [2024-12-06 18:05:45.783670] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:07:35.446 [2024-12-06 18:05:45.784000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58027 ] 00:07:35.446 [2024-12-06 18:05:45.965016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.703 [2024-12-06 18:05:46.085105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58027 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58027 ']' 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58027 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58027 00:07:40.968 killing process with pid 58027 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58027' 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58027 00:07:40.968 18:05:50 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58027 00:07:42.871 00:07:42.871 real 0m7.461s 00:07:42.871 user 0m6.954s 00:07:42.871 sys 0m0.417s 00:07:42.871 18:05:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.871 18:05:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.871 ************************************ 00:07:42.871 END TEST skip_rpc 00:07:42.871 
************************************ 00:07:42.871 18:05:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:42.871 18:05:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.871 18:05:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.871 18:05:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.871 ************************************ 00:07:42.871 START TEST skip_rpc_with_json 00:07:42.871 ************************************ 00:07:42.871 18:05:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:42.871 18:05:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:42.871 18:05:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58131 00:07:42.871 18:05:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:42.871 18:05:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:42.871 18:05:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58131 00:07:42.871 18:05:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58131 ']' 00:07:42.871 18:05:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.871 18:05:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.871 18:05:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.871 18:05:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.871 18:05:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:42.871 [2024-12-06 18:05:53.306396] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:07:42.871 [2024-12-06 18:05:53.306699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58131 ] 00:07:43.129 [2024-12-06 18:05:53.488504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.129 [2024-12-06 18:05:53.603040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.062 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.062 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:44.062 18:05:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:44.062 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.062 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:44.062 [2024-12-06 18:05:54.510926] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:44.062 request: 00:07:44.062 { 00:07:44.062 "trtype": "tcp", 00:07:44.062 "method": "nvmf_get_transports", 00:07:44.062 "req_id": 1 00:07:44.062 } 00:07:44.062 Got JSON-RPC error response 00:07:44.062 response: 00:07:44.062 { 00:07:44.062 "code": -19, 00:07:44.062 "message": "No such device" 00:07:44.062 } 00:07:44.062 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:44.062 18:05:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:44.062 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.062 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:44.062 [2024-12-06 18:05:54.523041] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.062 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.062 18:05:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:44.062 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.062 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:44.319 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.319 18:05:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:44.319 { 00:07:44.319 "subsystems": [ 00:07:44.319 { 00:07:44.319 "subsystem": "fsdev", 00:07:44.319 "config": [ 00:07:44.319 { 00:07:44.319 "method": "fsdev_set_opts", 00:07:44.319 "params": { 00:07:44.319 "fsdev_io_pool_size": 65535, 00:07:44.319 "fsdev_io_cache_size": 256 00:07:44.319 } 00:07:44.319 } 00:07:44.319 ] 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "subsystem": "keyring", 00:07:44.319 "config": [] 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "subsystem": "iobuf", 00:07:44.319 "config": [ 00:07:44.319 { 00:07:44.319 "method": "iobuf_set_options", 00:07:44.319 "params": { 00:07:44.319 "small_pool_count": 8192, 00:07:44.319 "large_pool_count": 1024, 00:07:44.319 "small_bufsize": 8192, 00:07:44.319 "large_bufsize": 135168, 00:07:44.319 "enable_numa": false 00:07:44.319 } 00:07:44.319 } 00:07:44.319 ] 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "subsystem": "sock", 00:07:44.319 "config": [ 00:07:44.319 { 
00:07:44.319 "method": "sock_set_default_impl", 00:07:44.319 "params": { 00:07:44.319 "impl_name": "posix" 00:07:44.319 } 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "method": "sock_impl_set_options", 00:07:44.319 "params": { 00:07:44.319 "impl_name": "ssl", 00:07:44.319 "recv_buf_size": 4096, 00:07:44.319 "send_buf_size": 4096, 00:07:44.319 "enable_recv_pipe": true, 00:07:44.319 "enable_quickack": false, 00:07:44.319 "enable_placement_id": 0, 00:07:44.319 "enable_zerocopy_send_server": true, 00:07:44.319 "enable_zerocopy_send_client": false, 00:07:44.319 "zerocopy_threshold": 0, 00:07:44.319 "tls_version": 0, 00:07:44.319 "enable_ktls": false 00:07:44.319 } 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "method": "sock_impl_set_options", 00:07:44.319 "params": { 00:07:44.319 "impl_name": "posix", 00:07:44.319 "recv_buf_size": 2097152, 00:07:44.319 "send_buf_size": 2097152, 00:07:44.319 "enable_recv_pipe": true, 00:07:44.319 "enable_quickack": false, 00:07:44.319 "enable_placement_id": 0, 00:07:44.319 "enable_zerocopy_send_server": true, 00:07:44.319 "enable_zerocopy_send_client": false, 00:07:44.319 "zerocopy_threshold": 0, 00:07:44.319 "tls_version": 0, 00:07:44.319 "enable_ktls": false 00:07:44.319 } 00:07:44.319 } 00:07:44.319 ] 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "subsystem": "vmd", 00:07:44.319 "config": [] 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "subsystem": "accel", 00:07:44.319 "config": [ 00:07:44.319 { 00:07:44.319 "method": "accel_set_options", 00:07:44.319 "params": { 00:07:44.319 "small_cache_size": 128, 00:07:44.319 "large_cache_size": 16, 00:07:44.319 "task_count": 2048, 00:07:44.319 "sequence_count": 2048, 00:07:44.319 "buf_count": 2048 00:07:44.319 } 00:07:44.319 } 00:07:44.319 ] 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "subsystem": "bdev", 00:07:44.319 "config": [ 00:07:44.319 { 00:07:44.319 "method": "bdev_set_options", 00:07:44.319 "params": { 00:07:44.319 "bdev_io_pool_size": 65535, 00:07:44.319 "bdev_io_cache_size": 256, 00:07:44.319 "bdev_auto_examine": true, 00:07:44.319 "iobuf_small_cache_size": 128, 00:07:44.319 "iobuf_large_cache_size": 16 00:07:44.319 } 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "method": "bdev_raid_set_options", 00:07:44.319 "params": { 00:07:44.319 "process_window_size_kb": 1024, 00:07:44.319 "process_max_bandwidth_mb_sec": 0 00:07:44.319 } 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "method": "bdev_iscsi_set_options", 00:07:44.319 "params": { 00:07:44.319 "timeout_sec": 30 00:07:44.319 } 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "method": "bdev_nvme_set_options", 00:07:44.319 "params": { 00:07:44.319 "action_on_timeout": "none", 00:07:44.319 "timeout_us": 0, 00:07:44.319 "timeout_admin_us": 0, 00:07:44.319 "keep_alive_timeout_ms": 10000, 00:07:44.319 "arbitration_burst": 0, 00:07:44.319 "low_priority_weight": 0, 00:07:44.319 "medium_priority_weight": 0, 00:07:44.319 "high_priority_weight": 0, 00:07:44.319 "nvme_adminq_poll_period_us": 10000, 00:07:44.319 "nvme_ioq_poll_period_us": 0, 00:07:44.319 "io_queue_requests": 0, 00:07:44.319 "delay_cmd_submit": true, 00:07:44.319 "transport_retry_count": 4, 00:07:44.319 "bdev_retry_count": 3, 00:07:44.319 "transport_ack_timeout": 0, 00:07:44.319 "ctrlr_loss_timeout_sec": 0, 00:07:44.319 "reconnect_delay_sec": 0, 00:07:44.319 "fast_io_fail_timeout_sec": 0, 00:07:44.319 "disable_auto_failback": false, 00:07:44.319 "generate_uuids": false, 00:07:44.319 "transport_tos": 0, 00:07:44.319 "nvme_error_stat": false, 00:07:44.319 "rdma_srq_size": 0, 00:07:44.319 "io_path_stat": false, 
00:07:44.319 "allow_accel_sequence": false, 00:07:44.319 "rdma_max_cq_size": 0, 00:07:44.319 "rdma_cm_event_timeout_ms": 0, 00:07:44.319 "dhchap_digests": [ 00:07:44.319 "sha256", 00:07:44.319 "sha384", 00:07:44.319 "sha512" 00:07:44.319 ], 00:07:44.319 "dhchap_dhgroups": [ 00:07:44.319 "null", 00:07:44.319 "ffdhe2048", 00:07:44.319 "ffdhe3072", 00:07:44.319 "ffdhe4096", 00:07:44.319 "ffdhe6144", 00:07:44.319 "ffdhe8192" 00:07:44.319 ], 00:07:44.319 "rdma_umr_per_io": false 00:07:44.319 } 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "method": "bdev_nvme_set_hotplug", 00:07:44.319 "params": { 00:07:44.319 "period_us": 100000, 00:07:44.319 "enable": false 00:07:44.319 } 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "method": "bdev_wait_for_examine" 00:07:44.319 } 00:07:44.319 ] 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "subsystem": "scsi", 00:07:44.319 "config": null 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "subsystem": "scheduler", 00:07:44.319 "config": [ 00:07:44.319 { 00:07:44.319 "method": "framework_set_scheduler", 00:07:44.319 "params": { 00:07:44.319 "name": "static" 00:07:44.319 } 00:07:44.319 } 00:07:44.319 ] 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "subsystem": "vhost_scsi", 00:07:44.319 "config": [] 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "subsystem": "vhost_blk", 00:07:44.319 "config": [] 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "subsystem": "ublk", 00:07:44.319 "config": [] 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "subsystem": "nbd", 00:07:44.319 "config": [] 00:07:44.319 }, 00:07:44.319 { 00:07:44.319 "subsystem": "nvmf", 00:07:44.319 "config": [ 00:07:44.319 { 00:07:44.319 "method": "nvmf_set_config", 00:07:44.319 "params": { 00:07:44.319 "discovery_filter": "match_any", 00:07:44.319 "admin_cmd_passthru": { 00:07:44.319 "identify_ctrlr": false 00:07:44.319 }, 00:07:44.320 "dhchap_digests": [ 00:07:44.320 "sha256", 00:07:44.320 "sha384", 00:07:44.320 "sha512" 00:07:44.320 ], 00:07:44.320 "dhchap_dhgroups": [ 00:07:44.320 "null", 00:07:44.320 "ffdhe2048", 00:07:44.320 "ffdhe3072", 00:07:44.320 "ffdhe4096", 00:07:44.320 "ffdhe6144", 00:07:44.320 "ffdhe8192" 00:07:44.320 ] 00:07:44.320 } 00:07:44.320 }, 00:07:44.320 { 00:07:44.320 "method": "nvmf_set_max_subsystems", 00:07:44.320 "params": { 00:07:44.320 "max_subsystems": 1024 00:07:44.320 } 00:07:44.320 }, 00:07:44.320 { 00:07:44.320 "method": "nvmf_set_crdt", 00:07:44.320 "params": { 00:07:44.320 "crdt1": 0, 00:07:44.320 "crdt2": 0, 00:07:44.320 "crdt3": 0 00:07:44.320 } 00:07:44.320 }, 00:07:44.320 { 00:07:44.320 "method": "nvmf_create_transport", 00:07:44.320 "params": { 00:07:44.320 "trtype": "TCP", 00:07:44.320 "max_queue_depth": 128, 00:07:44.320 "max_io_qpairs_per_ctrlr": 127, 00:07:44.320 "in_capsule_data_size": 4096, 00:07:44.320 "max_io_size": 131072, 00:07:44.320 "io_unit_size": 131072, 00:07:44.320 "max_aq_depth": 128, 00:07:44.320 "num_shared_buffers": 511, 00:07:44.320 "buf_cache_size": 4294967295, 00:07:44.320 "dif_insert_or_strip": false, 00:07:44.320 "zcopy": false, 00:07:44.320 "c2h_success": true, 00:07:44.320 "sock_priority": 0, 00:07:44.320 "abort_timeout_sec": 1, 00:07:44.320 "ack_timeout": 0, 00:07:44.320 "data_wr_pool_size": 0 00:07:44.320 } 00:07:44.320 } 00:07:44.320 ] 00:07:44.320 }, 00:07:44.320 { 00:07:44.320 "subsystem": "iscsi", 00:07:44.320 "config": [ 00:07:44.320 { 00:07:44.320 "method": "iscsi_set_options", 00:07:44.320 "params": { 00:07:44.320 "node_base": "iqn.2016-06.io.spdk", 00:07:44.320 "max_sessions": 128, 00:07:44.320 "max_connections_per_session": 2, 00:07:44.320 
"max_queue_depth": 64, 00:07:44.320 "default_time2wait": 2, 00:07:44.320 "default_time2retain": 20, 00:07:44.320 "first_burst_length": 8192, 00:07:44.320 "immediate_data": true, 00:07:44.320 "allow_duplicated_isid": false, 00:07:44.320 "error_recovery_level": 0, 00:07:44.320 "nop_timeout": 60, 00:07:44.320 "nop_in_interval": 30, 00:07:44.320 "disable_chap": false, 00:07:44.320 "require_chap": false, 00:07:44.320 "mutual_chap": false, 00:07:44.320 "chap_group": 0, 00:07:44.320 "max_large_datain_per_connection": 64, 00:07:44.320 "max_r2t_per_connection": 4, 00:07:44.320 "pdu_pool_size": 36864, 00:07:44.320 "immediate_data_pool_size": 16384, 00:07:44.320 "data_out_pool_size": 2048 00:07:44.320 } 00:07:44.320 } 00:07:44.320 ] 00:07:44.320 } 00:07:44.320 ] 00:07:44.320 } 00:07:44.320 18:05:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:44.320 18:05:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58131 00:07:44.320 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58131 ']' 00:07:44.320 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58131 00:07:44.320 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:44.320 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.320 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58131 00:07:44.320 killing process with pid 58131 00:07:44.320 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.320 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.320 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58131' 00:07:44.320 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58131 00:07:44.320 18:05:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58131 00:07:46.849 18:05:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58187 00:07:46.849 18:05:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:46.849 18:05:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:52.118 18:06:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58187 00:07:52.118 18:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58187 ']' 00:07:52.118 18:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58187 00:07:52.118 18:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:52.118 18:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.118 18:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58187 00:07:52.118 killing process with pid 58187 00:07:52.118 18:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.118 18:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.118 18:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58187' 00:07:52.118 18:06:02 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@973 -- # kill 58187 00:07:52.118 18:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58187 00:07:54.651 18:06:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:54.651 18:06:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:54.651 ************************************ 00:07:54.651 END TEST skip_rpc_with_json 00:07:54.651 ************************************ 00:07:54.651 00:07:54.651 real 0m11.480s 00:07:54.651 user 0m10.904s 00:07:54.651 sys 0m0.913s 00:07:54.651 18:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.651 18:06:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:54.651 18:06:04 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:54.651 18:06:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.651 18:06:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.651 18:06:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.651 ************************************ 00:07:54.651 START TEST skip_rpc_with_delay 00:07:54.651 ************************************ 00:07:54.651 18:06:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:54.651 18:06:04 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:54.651 18:06:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:54.651 18:06:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:54.652 18:06:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:54.652 18:06:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.652 18:06:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:54.652 18:06:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.652 18:06:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:54.652 18:06:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.652 18:06:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:54.652 18:06:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:54.652 18:06:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:54.652 [2024-12-06 18:06:04.851613] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
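The ERROR above is the whole point of the skip_rpc_with_delay case: spdk_tgt must refuse to start when --no-rpc-server is combined with --wait-for-rpc, and the harness's NOT wrapper converts the expected non-zero exit into a pass (es=1). A bare-bones sketch of that negative assertion, using the binary path shown in the trace:

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

# The target is expected to exit non-zero here: --wait-for-rpc is meaningless
# when no RPC server will be started, so app.c rejects the combination.
if "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "FAIL: spdk_tgt started despite an invalid flag combination" >&2
    exit 1
fi
echo "PASS: invalid combination rejected as expected"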
00:07:54.652 18:06:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:54.652 18:06:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:54.652 18:06:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:54.652 18:06:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:54.652 00:07:54.652 real 0m0.182s 00:07:54.652 user 0m0.087s 00:07:54.652 sys 0m0.090s 00:07:54.652 18:06:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.652 ************************************ 00:07:54.652 END TEST skip_rpc_with_delay 00:07:54.652 ************************************ 00:07:54.652 18:06:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:54.652 18:06:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:54.652 18:06:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:54.652 18:06:04 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:54.652 18:06:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.652 18:06:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.652 18:06:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.652 ************************************ 00:07:54.652 START TEST exit_on_failed_rpc_init 00:07:54.652 ************************************ 00:07:54.652 18:06:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:54.652 18:06:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58326 00:07:54.652 18:06:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:54.652 18:06:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58326 00:07:54.652 18:06:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58326 ']' 00:07:54.652 18:06:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.652 18:06:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.652 18:06:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.652 18:06:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.652 18:06:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:54.652 [2024-12-06 18:06:05.097829] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:07:54.652 [2024-12-06 18:06:05.097955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58326 ] 00:07:54.911 [2024-12-06 18:06:05.276500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.911 [2024-12-06 18:06:05.381669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.846 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.846 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:55.846 18:06:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:55.846 18:06:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:55.846 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:55.846 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:55.846 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:55.846 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.846 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:55.846 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.846 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:55.846 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.846 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:55.846 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:55.846 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:55.846 [2024-12-06 18:06:06.361451] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:07:55.846 [2024-12-06 18:06:06.361572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58344 ] 00:07:56.105 [2024-12-06 18:06:06.543213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.105 [2024-12-06 18:06:06.663881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.105 [2024-12-06 18:06:06.663979] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:56.105 [2024-12-06 18:06:06.663996] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:56.105 [2024-12-06 18:06:06.664015] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.365 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:56.365 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:56.365 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:56.365 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:56.365 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:56.365 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:56.365 18:06:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:56.365 18:06:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58326 00:07:56.365 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58326 ']' 00:07:56.365 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58326 00:07:56.365 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:56.365 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.625 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58326 00:07:56.625 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.625 killing process with pid 58326 00:07:56.625 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.625 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58326' 00:07:56.625 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58326 00:07:56.625 18:06:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58326 00:07:59.158 00:07:59.158 real 0m4.409s 00:07:59.158 user 0m4.755s 00:07:59.158 sys 0m0.619s 00:07:59.158 18:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.158 18:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:59.158 ************************************ 00:07:59.158 END TEST exit_on_failed_rpc_init 00:07:59.158 ************************************ 00:07:59.158 18:06:09 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:59.158 ************************************ 00:07:59.158 END TEST skip_rpc 00:07:59.158 00:07:59.158 real 0m24.037s 00:07:59.158 user 0m22.908s 00:07:59.158 sys 0m2.342s 00:07:59.158 18:06:09 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.158 18:06:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.158 ************************************ 00:07:59.158 18:06:09 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:59.158 18:06:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.158 18:06:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.158 18:06:09 -- common/autotest_common.sh@10 -- # set +x 00:07:59.158 
************************************ 00:07:59.158 START TEST rpc_client 00:07:59.158 ************************************ 00:07:59.158 18:06:09 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:59.158 * Looking for test storage... 00:07:59.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:59.158 18:06:09 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:59.158 18:06:09 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:59.158 18:06:09 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:07:59.158 18:06:09 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.158 18:06:09 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:59.158 18:06:09 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.158 18:06:09 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:59.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.158 --rc genhtml_branch_coverage=1 00:07:59.158 --rc genhtml_function_coverage=1 00:07:59.158 --rc genhtml_legend=1 00:07:59.158 --rc geninfo_all_blocks=1 00:07:59.158 --rc geninfo_unexecuted_blocks=1 00:07:59.158 00:07:59.158 ' 00:07:59.158 18:06:09 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:59.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.158 --rc genhtml_branch_coverage=1 00:07:59.158 --rc genhtml_function_coverage=1 00:07:59.158 --rc genhtml_legend=1 00:07:59.158 --rc geninfo_all_blocks=1 00:07:59.158 --rc geninfo_unexecuted_blocks=1 00:07:59.158 00:07:59.158 ' 00:07:59.158 18:06:09 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:59.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.158 --rc genhtml_branch_coverage=1 00:07:59.158 --rc genhtml_function_coverage=1 00:07:59.158 --rc genhtml_legend=1 00:07:59.158 --rc geninfo_all_blocks=1 00:07:59.158 --rc geninfo_unexecuted_blocks=1 00:07:59.158 00:07:59.158 ' 00:07:59.158 18:06:09 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:59.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.158 --rc genhtml_branch_coverage=1 00:07:59.158 --rc genhtml_function_coverage=1 00:07:59.158 --rc genhtml_legend=1 00:07:59.158 --rc geninfo_all_blocks=1 00:07:59.158 --rc geninfo_unexecuted_blocks=1 00:07:59.158 00:07:59.158 ' 00:07:59.158 18:06:09 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:59.416 OK 00:07:59.416 18:06:09 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:59.416 00:07:59.416 real 0m0.289s 00:07:59.416 user 0m0.153s 00:07:59.416 sys 0m0.150s 00:07:59.416 18:06:09 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.416 18:06:09 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:59.416 ************************************ 00:07:59.416 END TEST rpc_client 00:07:59.416 ************************************ 00:07:59.416 18:06:09 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:59.416 18:06:09 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.416 18:06:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.416 18:06:09 -- common/autotest_common.sh@10 -- # set +x 00:07:59.416 ************************************ 00:07:59.416 START TEST json_config 00:07:59.416 ************************************ 00:07:59.416 18:06:09 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:59.416 18:06:09 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:59.416 18:06:09 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:07:59.416 18:06:09 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:59.674 18:06:10 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:59.674 18:06:10 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.674 18:06:10 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.674 18:06:10 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.674 18:06:10 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.674 18:06:10 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.674 18:06:10 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.674 18:06:10 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.674 18:06:10 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.674 18:06:10 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.674 18:06:10 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.674 18:06:10 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.674 18:06:10 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:59.674 18:06:10 json_config -- scripts/common.sh@345 -- # : 1 00:07:59.674 18:06:10 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.674 18:06:10 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.674 18:06:10 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:59.674 18:06:10 json_config -- scripts/common.sh@353 -- # local d=1 00:07:59.674 18:06:10 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.674 18:06:10 json_config -- scripts/common.sh@355 -- # echo 1 00:07:59.674 18:06:10 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.674 18:06:10 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:59.674 18:06:10 json_config -- scripts/common.sh@353 -- # local d=2 00:07:59.674 18:06:10 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.674 18:06:10 json_config -- scripts/common.sh@355 -- # echo 2 00:07:59.674 18:06:10 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.674 18:06:10 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.674 18:06:10 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.674 18:06:10 json_config -- scripts/common.sh@368 -- # return 0 00:07:59.674 18:06:10 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.674 18:06:10 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:59.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.675 --rc genhtml_branch_coverage=1 00:07:59.675 --rc genhtml_function_coverage=1 00:07:59.675 --rc genhtml_legend=1 00:07:59.675 --rc geninfo_all_blocks=1 00:07:59.675 --rc geninfo_unexecuted_blocks=1 00:07:59.675 00:07:59.675 ' 00:07:59.675 18:06:10 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:59.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.675 --rc genhtml_branch_coverage=1 00:07:59.675 --rc genhtml_function_coverage=1 00:07:59.675 --rc genhtml_legend=1 00:07:59.675 --rc geninfo_all_blocks=1 00:07:59.675 --rc geninfo_unexecuted_blocks=1 00:07:59.675 00:07:59.675 ' 00:07:59.675 18:06:10 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:59.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.675 --rc genhtml_branch_coverage=1 00:07:59.675 --rc genhtml_function_coverage=1 00:07:59.675 --rc genhtml_legend=1 00:07:59.675 --rc geninfo_all_blocks=1 00:07:59.675 --rc geninfo_unexecuted_blocks=1 00:07:59.675 00:07:59.675 ' 00:07:59.675 18:06:10 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:59.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.675 --rc genhtml_branch_coverage=1 00:07:59.675 --rc genhtml_function_coverage=1 00:07:59.675 --rc genhtml_legend=1 00:07:59.675 --rc geninfo_all_blocks=1 00:07:59.675 --rc geninfo_unexecuted_blocks=1 00:07:59.675 00:07:59.675 ' 00:07:59.675 18:06:10 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.675 18:06:10 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8fbcdf99-1d6c-4dcf-8c56-70f8c7f05438 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8fbcdf99-1d6c-4dcf-8c56-70f8c7f05438 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:59.675 18:06:10 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.675 18:06:10 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.675 18:06:10 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.675 18:06:10 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.675 18:06:10 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.675 18:06:10 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.675 18:06:10 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.675 18:06:10 json_config -- paths/export.sh@5 -- # export PATH 00:07:59.675 18:06:10 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@51 -- # : 0 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.675 18:06:10 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.675 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.675 18:06:10 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.675 18:06:10 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:59.675 18:06:10 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:59.675 18:06:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:59.675 18:06:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:59.675 18:06:10 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:59.675 18:06:10 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:59.675 WARNING: No tests are enabled so not running JSON configuration tests 00:07:59.675 18:06:10 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:59.675 00:07:59.675 real 0m0.176s 00:07:59.675 user 0m0.099s 00:07:59.675 sys 0m0.084s 00:07:59.675 18:06:10 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.675 18:06:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:59.675 ************************************ 00:07:59.675 END TEST json_config 00:07:59.675 ************************************ 00:07:59.675 18:06:10 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:59.675 18:06:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.675 18:06:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.675 18:06:10 -- common/autotest_common.sh@10 -- # set +x 00:07:59.675 ************************************ 00:07:59.675 START TEST json_config_extra_key 00:07:59.675 ************************************ 00:07:59.675 18:06:10 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:59.675 18:06:10 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:59.675 18:06:10 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:07:59.675 18:06:10 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:59.969 18:06:10 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.969 18:06:10 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.969 18:06:10 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:59.969 18:06:10 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.969 18:06:10 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:59.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.969 --rc genhtml_branch_coverage=1 00:07:59.969 --rc genhtml_function_coverage=1 00:07:59.969 --rc genhtml_legend=1 00:07:59.969 --rc geninfo_all_blocks=1 00:07:59.969 --rc geninfo_unexecuted_blocks=1 00:07:59.969 00:07:59.969 ' 00:07:59.969 18:06:10 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:59.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.969 --rc genhtml_branch_coverage=1 00:07:59.969 --rc genhtml_function_coverage=1 00:07:59.969 --rc genhtml_legend=1 00:07:59.969 --rc geninfo_all_blocks=1 00:07:59.969 --rc geninfo_unexecuted_blocks=1 00:07:59.969 00:07:59.969 ' 00:07:59.970 18:06:10 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:59.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.970 --rc genhtml_branch_coverage=1 00:07:59.970 --rc genhtml_function_coverage=1 00:07:59.970 --rc genhtml_legend=1 00:07:59.970 --rc geninfo_all_blocks=1 00:07:59.970 --rc geninfo_unexecuted_blocks=1 00:07:59.970 00:07:59.970 ' 00:07:59.970 18:06:10 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:59.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.970 --rc genhtml_branch_coverage=1 00:07:59.970 --rc 
genhtml_function_coverage=1 00:07:59.970 --rc genhtml_legend=1 00:07:59.970 --rc geninfo_all_blocks=1 00:07:59.970 --rc geninfo_unexecuted_blocks=1 00:07:59.970 00:07:59.970 ' 00:07:59.970 18:06:10 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8fbcdf99-1d6c-4dcf-8c56-70f8c7f05438 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8fbcdf99-1d6c-4dcf-8c56-70f8c7f05438 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:59.970 18:06:10 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.970 18:06:10 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.970 18:06:10 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.970 18:06:10 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.970 18:06:10 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.970 18:06:10 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.970 18:06:10 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.970 18:06:10 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:59.970 18:06:10 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.970 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.970 18:06:10 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.970 18:06:10 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:59.970 18:06:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:59.970 18:06:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:59.970 INFO: launching applications... 00:07:59.970 Waiting for target to run... 00:07:59.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
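The "[: : integer expression expected" complaint printed above (and repeated each time nvmf/common.sh is sourced) is bash's [ builtin rejecting an empty operand: the xtrace shows line 33 expanding to '[' '' -eq 1 ']', and -eq requires an integer on both sides, so the test exits with status 2 instead of a clean false; the run only continues because nothing inspects that status. A minimal reproduction with two defensive rewrites; the variable behind line 33 is already expanded away in the xtrace, so "flag" below is a hypothetical stand-in:

    flag=""                     # stands in for whatever line 33 expands from
    [ "$flag" -eq 1 ]           # prints "[: : integer expression expected", status 2
    [ "${flag:-0}" -eq 1 ]      # guard 1: default the empty value to 0, a clean false
    [[ "$flag" == "1" ]]        # guard 2: compare as a string, which never errors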
00:07:59.970 18:06:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:59.970 18:06:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:59.970 18:06:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:59.970 18:06:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:59.970 18:06:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:59.970 18:06:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:59.970 18:06:10 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:59.970 18:06:10 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:59.970 18:06:10 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:59.970 18:06:10 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:59.970 18:06:10 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:59.970 18:06:10 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:59.970 18:06:10 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:59.970 18:06:10 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:59.970 18:06:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:59.970 18:06:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:59.970 18:06:10 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58554 00:07:59.970 18:06:10 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:59.970 18:06:10 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:59.970 18:06:10 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58554 /var/tmp/spdk_tgt.sock 00:07:59.970 18:06:10 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58554 ']' 00:07:59.970 18:06:10 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:59.970 18:06:10 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.970 18:06:10 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:59.970 18:06:10 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.970 18:06:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:59.970 [2024-12-06 18:06:10.446162] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:07:59.970 [2024-12-06 18:06:10.446483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58554 ] 00:08:00.536 [2024-12-06 18:06:10.863792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.536 [2024-12-06 18:06:10.973082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.474 18:06:11 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.474 18:06:11 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:01.474 18:06:11 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:01.474 00:08:01.474 18:06:11 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:01.474 INFO: shutting down applications... 00:08:01.474 18:06:11 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:01.474 18:06:11 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:01.474 18:06:11 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:01.474 18:06:11 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58554 ]] 00:08:01.474 18:06:11 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58554 00:08:01.474 18:06:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:01.474 18:06:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:01.474 18:06:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58554 00:08:01.474 18:06:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:01.732 18:06:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:01.732 18:06:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:01.732 18:06:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58554 00:08:01.732 18:06:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:02.299 18:06:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:02.299 18:06:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:02.299 18:06:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58554 00:08:02.299 18:06:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:02.868 18:06:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:02.868 18:06:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:02.868 18:06:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58554 00:08:02.868 18:06:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:03.437 18:06:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:03.437 18:06:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:03.437 18:06:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58554 00:08:03.437 18:06:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:04.004 18:06:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:04.004 18:06:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:04.004 18:06:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58554 
00:08:04.004 18:06:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:04.264 18:06:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:04.264 18:06:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:04.264 18:06:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58554 00:08:04.264 SPDK target shutdown done 00:08:04.264 Success 00:08:04.264 18:06:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:04.264 18:06:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:04.264 18:06:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:04.264 18:06:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:04.264 18:06:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:04.264 00:08:04.264 real 0m4.676s 00:08:04.264 user 0m4.203s 00:08:04.264 sys 0m0.595s 00:08:04.264 18:06:14 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.264 18:06:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:04.264 ************************************ 00:08:04.264 END TEST json_config_extra_key 00:08:04.264 ************************************ 00:08:04.523 18:06:14 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:04.523 18:06:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.523 18:06:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.523 18:06:14 -- common/autotest_common.sh@10 -- # set +x 00:08:04.523 ************************************ 00:08:04.523 START TEST alias_rpc 00:08:04.523 ************************************ 00:08:04.523 18:06:14 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:04.523 * Looking for test storage... 
00:08:04.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:04.523 18:06:14 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:04.523 18:06:14 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:04.523 18:06:14 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:04.523 18:06:15 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.523 18:06:15 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:04.523 18:06:15 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.523 18:06:15 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:04.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.523 --rc genhtml_branch_coverage=1 00:08:04.523 --rc genhtml_function_coverage=1 00:08:04.523 --rc genhtml_legend=1 00:08:04.523 --rc geninfo_all_blocks=1 00:08:04.523 --rc geninfo_unexecuted_blocks=1 00:08:04.523 00:08:04.523 ' 00:08:04.523 18:06:15 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:04.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.523 --rc genhtml_branch_coverage=1 00:08:04.523 --rc genhtml_function_coverage=1 00:08:04.523 --rc genhtml_legend=1 00:08:04.523 --rc geninfo_all_blocks=1 00:08:04.523 --rc geninfo_unexecuted_blocks=1 00:08:04.523 00:08:04.523 ' 00:08:04.523 18:06:15 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:04.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.523 --rc genhtml_branch_coverage=1 00:08:04.523 --rc genhtml_function_coverage=1 00:08:04.523 --rc genhtml_legend=1 00:08:04.523 --rc geninfo_all_blocks=1 00:08:04.523 --rc geninfo_unexecuted_blocks=1 00:08:04.523 00:08:04.523 ' 00:08:04.523 18:06:15 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:04.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.523 --rc genhtml_branch_coverage=1 00:08:04.523 --rc genhtml_function_coverage=1 00:08:04.523 --rc genhtml_legend=1 00:08:04.523 --rc geninfo_all_blocks=1 00:08:04.523 --rc geninfo_unexecuted_blocks=1 00:08:04.523 00:08:04.523 ' 00:08:04.523 18:06:15 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:04.523 18:06:15 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58666 00:08:04.523 18:06:15 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:04.523 18:06:15 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58666 00:08:04.523 18:06:15 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58666 ']' 00:08:04.523 18:06:15 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.523 18:06:15 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.523 18:06:15 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.523 18:06:15 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.523 18:06:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.782 [2024-12-06 18:06:15.199318] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:08:04.782 [2024-12-06 18:06:15.199656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58666 ] 00:08:05.040 [2024-12-06 18:06:15.382107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.040 [2024-12-06 18:06:15.498117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.977 18:06:16 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.977 18:06:16 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:05.977 18:06:16 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:06.236 18:06:16 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58666 00:08:06.236 18:06:16 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58666 ']' 00:08:06.236 18:06:16 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58666 00:08:06.236 18:06:16 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:06.236 18:06:16 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.236 18:06:16 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58666 00:08:06.236 killing process with pid 58666 00:08:06.236 18:06:16 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.237 18:06:16 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.237 18:06:16 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58666' 00:08:06.237 18:06:16 alias_rpc -- common/autotest_common.sh@973 -- # kill 58666 00:08:06.237 18:06:16 alias_rpc -- common/autotest_common.sh@978 -- # wait 58666 00:08:08.768 ************************************ 00:08:08.768 END TEST alias_rpc 00:08:08.768 ************************************ 00:08:08.768 00:08:08.768 real 0m4.195s 00:08:08.768 user 0m4.163s 00:08:08.768 sys 0m0.590s 00:08:08.768 18:06:19 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.768 18:06:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.768 18:06:19 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:08.768 18:06:19 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:08.768 18:06:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:08.768 18:06:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.768 18:06:19 -- common/autotest_common.sh@10 -- # set +x 00:08:08.768 ************************************ 00:08:08.768 START TEST spdkcli_tcp 00:08:08.768 ************************************ 00:08:08.768 18:06:19 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:08.768 * Looking for test storage... 
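Two teardown idioms recur in the output above: json_config_extra_key sends SIGINT to spdk_tgt and then polls it with kill -0 in 0.5 s steps for up to 30 iterations, while alias_rpc's killprocess verifies the pid still answers, logs its comm name, then kills it and reaps it with wait. A condensed sketch of the combined pattern, assuming $pid holds a backgrounded spdk_tgt and simplifying killprocess's sudo/name checks:

    kill -SIGINT "$pid"                       # ask the target to exit cleanly
    for (( i = 0; i < 30; i++ )); do          # poll up to ~15 s, as common.sh does
        kill -0 "$pid" 2>/dev/null || break   # kill -0 only tests existence
        sleep 0.5
    done
    if kill -0 "$pid" 2>/dev/null; then       # still alive: fall back to SIGTERM
        ps --no-headers -o comm= "$pid"       # killprocess logs the comm name first
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null                   # reap it so no zombie is left behind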
00:08:08.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:08.768 18:06:19 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:08.768 18:06:19 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:08.768 18:06:19 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:08:08.768 18:06:19 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:08.768 18:06:19 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.768 18:06:19 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.768 18:06:19 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.768 18:06:19 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.768 18:06:19 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.768 18:06:19 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.768 18:06:19 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.768 18:06:19 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.768 18:06:19 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.769 18:06:19 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:08.769 18:06:19 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.769 18:06:19 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:08.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.769 --rc genhtml_branch_coverage=1 00:08:08.769 --rc genhtml_function_coverage=1 00:08:08.769 --rc genhtml_legend=1 00:08:08.769 --rc geninfo_all_blocks=1 00:08:08.769 --rc geninfo_unexecuted_blocks=1 00:08:08.769 00:08:08.769 ' 00:08:08.769 18:06:19 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:08.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.769 --rc genhtml_branch_coverage=1 00:08:08.769 --rc genhtml_function_coverage=1 00:08:08.769 --rc genhtml_legend=1 00:08:08.769 --rc geninfo_all_blocks=1 00:08:08.769 --rc geninfo_unexecuted_blocks=1 00:08:08.769 
00:08:08.769 ' 00:08:08.769 18:06:19 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:08.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.769 --rc genhtml_branch_coverage=1 00:08:08.769 --rc genhtml_function_coverage=1 00:08:08.769 --rc genhtml_legend=1 00:08:08.769 --rc geninfo_all_blocks=1 00:08:08.769 --rc geninfo_unexecuted_blocks=1 00:08:08.769 00:08:08.769 ' 00:08:08.769 18:06:19 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:08.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.769 --rc genhtml_branch_coverage=1 00:08:08.769 --rc genhtml_function_coverage=1 00:08:08.769 --rc genhtml_legend=1 00:08:08.769 --rc geninfo_all_blocks=1 00:08:08.769 --rc geninfo_unexecuted_blocks=1 00:08:08.769 00:08:08.769 ' 00:08:08.769 18:06:19 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:08.769 18:06:19 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:08.769 18:06:19 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:08.769 18:06:19 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:08.769 18:06:19 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:08.769 18:06:19 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:08.769 18:06:19 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:08.769 18:06:19 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:08.769 18:06:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:08.769 18:06:19 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58773 00:08:08.769 18:06:19 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58773 00:08:08.769 18:06:19 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:08.769 18:06:19 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58773 ']' 00:08:08.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.769 18:06:19 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.769 18:06:19 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.769 18:06:19 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.769 18:06:19 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.769 18:06:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:09.028 [2024-12-06 18:06:19.410722] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
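Every test in this stretch opens by replaying the same lt 1.15 2 walk through scripts/common.sh's cmp_versions to decide which lcov flag style to export: split both version strings on dots, dashes, or colons, then compare them numerically field by field, padding the shorter one with zeros. A compact restatement of that comparison, assuming purely numeric fields (the fuller decimal helper visible in the xtrace also validates each field against ^[0-9]+$):

    lt() { cmp_versions "$1" "<" "$2"; }        # usage: lt 1.15 2

    cmp_versions() {
        local IFS=.-:                           # split fields on . - :
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
            (( a > b )) && { [[ $2 == ">" ]]; return; }
            (( a < b )) && { [[ $2 == "<" ]]; return; }
        done
        [[ $2 == "<=" || $2 == ">=" ]]          # equal versions satisfy only <= / >=
    }

With the values logged above, lt 1.15 2 succeeds, which is why each test lands in the old-style --rc lcov_branch_coverage=1 branch of the LCOV_OPTS export.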
00:08:09.028 [2024-12-06 18:06:19.410873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58773 ] 00:08:09.028 [2024-12-06 18:06:19.583947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:09.287 [2024-12-06 18:06:19.703050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.287 [2024-12-06 18:06:19.703087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.221 18:06:20 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.221 18:06:20 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:10.221 18:06:20 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58794 00:08:10.221 18:06:20 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:10.221 18:06:20 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:10.478 [ 00:08:10.478 "bdev_malloc_delete", 00:08:10.478 "bdev_malloc_create", 00:08:10.478 "bdev_null_resize", 00:08:10.478 "bdev_null_delete", 00:08:10.478 "bdev_null_create", 00:08:10.478 "bdev_nvme_cuse_unregister", 00:08:10.478 "bdev_nvme_cuse_register", 00:08:10.478 "bdev_opal_new_user", 00:08:10.478 "bdev_opal_set_lock_state", 00:08:10.478 "bdev_opal_delete", 00:08:10.478 "bdev_opal_get_info", 00:08:10.478 "bdev_opal_create", 00:08:10.478 "bdev_nvme_opal_revert", 00:08:10.478 "bdev_nvme_opal_init", 00:08:10.478 "bdev_nvme_send_cmd", 00:08:10.478 "bdev_nvme_set_keys", 00:08:10.478 "bdev_nvme_get_path_iostat", 00:08:10.478 "bdev_nvme_get_mdns_discovery_info", 00:08:10.478 "bdev_nvme_stop_mdns_discovery", 00:08:10.478 "bdev_nvme_start_mdns_discovery", 00:08:10.478 "bdev_nvme_set_multipath_policy", 00:08:10.478 "bdev_nvme_set_preferred_path", 00:08:10.478 "bdev_nvme_get_io_paths", 00:08:10.478 "bdev_nvme_remove_error_injection", 00:08:10.478 "bdev_nvme_add_error_injection", 00:08:10.478 "bdev_nvme_get_discovery_info", 00:08:10.478 "bdev_nvme_stop_discovery", 00:08:10.478 "bdev_nvme_start_discovery", 00:08:10.478 "bdev_nvme_get_controller_health_info", 00:08:10.478 "bdev_nvme_disable_controller", 00:08:10.478 "bdev_nvme_enable_controller", 00:08:10.478 "bdev_nvme_reset_controller", 00:08:10.478 "bdev_nvme_get_transport_statistics", 00:08:10.478 "bdev_nvme_apply_firmware", 00:08:10.478 "bdev_nvme_detach_controller", 00:08:10.478 "bdev_nvme_get_controllers", 00:08:10.478 "bdev_nvme_attach_controller", 00:08:10.478 "bdev_nvme_set_hotplug", 00:08:10.478 "bdev_nvme_set_options", 00:08:10.478 "bdev_passthru_delete", 00:08:10.478 "bdev_passthru_create", 00:08:10.478 "bdev_lvol_set_parent_bdev", 00:08:10.478 "bdev_lvol_set_parent", 00:08:10.478 "bdev_lvol_check_shallow_copy", 00:08:10.478 "bdev_lvol_start_shallow_copy", 00:08:10.478 "bdev_lvol_grow_lvstore", 00:08:10.478 "bdev_lvol_get_lvols", 00:08:10.478 "bdev_lvol_get_lvstores", 00:08:10.478 "bdev_lvol_delete", 00:08:10.478 "bdev_lvol_set_read_only", 00:08:10.478 "bdev_lvol_resize", 00:08:10.478 "bdev_lvol_decouple_parent", 00:08:10.478 "bdev_lvol_inflate", 00:08:10.478 "bdev_lvol_rename", 00:08:10.478 "bdev_lvol_clone_bdev", 00:08:10.478 "bdev_lvol_clone", 00:08:10.478 "bdev_lvol_snapshot", 00:08:10.478 "bdev_lvol_create", 00:08:10.478 "bdev_lvol_delete_lvstore", 00:08:10.478 "bdev_lvol_rename_lvstore", 00:08:10.478 
"bdev_lvol_create_lvstore", 00:08:10.478 "bdev_raid_set_options", 00:08:10.478 "bdev_raid_remove_base_bdev", 00:08:10.479 "bdev_raid_add_base_bdev", 00:08:10.479 "bdev_raid_delete", 00:08:10.479 "bdev_raid_create", 00:08:10.479 "bdev_raid_get_bdevs", 00:08:10.479 "bdev_error_inject_error", 00:08:10.479 "bdev_error_delete", 00:08:10.479 "bdev_error_create", 00:08:10.479 "bdev_split_delete", 00:08:10.479 "bdev_split_create", 00:08:10.479 "bdev_delay_delete", 00:08:10.479 "bdev_delay_create", 00:08:10.479 "bdev_delay_update_latency", 00:08:10.479 "bdev_zone_block_delete", 00:08:10.479 "bdev_zone_block_create", 00:08:10.479 "blobfs_create", 00:08:10.479 "blobfs_detect", 00:08:10.479 "blobfs_set_cache_size", 00:08:10.479 "bdev_xnvme_delete", 00:08:10.479 "bdev_xnvme_create", 00:08:10.479 "bdev_aio_delete", 00:08:10.479 "bdev_aio_rescan", 00:08:10.479 "bdev_aio_create", 00:08:10.479 "bdev_ftl_set_property", 00:08:10.479 "bdev_ftl_get_properties", 00:08:10.479 "bdev_ftl_get_stats", 00:08:10.479 "bdev_ftl_unmap", 00:08:10.479 "bdev_ftl_unload", 00:08:10.479 "bdev_ftl_delete", 00:08:10.479 "bdev_ftl_load", 00:08:10.479 "bdev_ftl_create", 00:08:10.479 "bdev_virtio_attach_controller", 00:08:10.479 "bdev_virtio_scsi_get_devices", 00:08:10.479 "bdev_virtio_detach_controller", 00:08:10.479 "bdev_virtio_blk_set_hotplug", 00:08:10.479 "bdev_iscsi_delete", 00:08:10.479 "bdev_iscsi_create", 00:08:10.479 "bdev_iscsi_set_options", 00:08:10.479 "accel_error_inject_error", 00:08:10.479 "ioat_scan_accel_module", 00:08:10.479 "dsa_scan_accel_module", 00:08:10.479 "iaa_scan_accel_module", 00:08:10.479 "keyring_file_remove_key", 00:08:10.479 "keyring_file_add_key", 00:08:10.479 "keyring_linux_set_options", 00:08:10.479 "fsdev_aio_delete", 00:08:10.479 "fsdev_aio_create", 00:08:10.479 "iscsi_get_histogram", 00:08:10.479 "iscsi_enable_histogram", 00:08:10.479 "iscsi_set_options", 00:08:10.479 "iscsi_get_auth_groups", 00:08:10.479 "iscsi_auth_group_remove_secret", 00:08:10.479 "iscsi_auth_group_add_secret", 00:08:10.479 "iscsi_delete_auth_group", 00:08:10.479 "iscsi_create_auth_group", 00:08:10.479 "iscsi_set_discovery_auth", 00:08:10.479 "iscsi_get_options", 00:08:10.479 "iscsi_target_node_request_logout", 00:08:10.479 "iscsi_target_node_set_redirect", 00:08:10.479 "iscsi_target_node_set_auth", 00:08:10.479 "iscsi_target_node_add_lun", 00:08:10.479 "iscsi_get_stats", 00:08:10.479 "iscsi_get_connections", 00:08:10.479 "iscsi_portal_group_set_auth", 00:08:10.479 "iscsi_start_portal_group", 00:08:10.479 "iscsi_delete_portal_group", 00:08:10.479 "iscsi_create_portal_group", 00:08:10.479 "iscsi_get_portal_groups", 00:08:10.479 "iscsi_delete_target_node", 00:08:10.479 "iscsi_target_node_remove_pg_ig_maps", 00:08:10.479 "iscsi_target_node_add_pg_ig_maps", 00:08:10.479 "iscsi_create_target_node", 00:08:10.479 "iscsi_get_target_nodes", 00:08:10.479 "iscsi_delete_initiator_group", 00:08:10.479 "iscsi_initiator_group_remove_initiators", 00:08:10.479 "iscsi_initiator_group_add_initiators", 00:08:10.479 "iscsi_create_initiator_group", 00:08:10.479 "iscsi_get_initiator_groups", 00:08:10.479 "nvmf_set_crdt", 00:08:10.479 "nvmf_set_config", 00:08:10.479 "nvmf_set_max_subsystems", 00:08:10.479 "nvmf_stop_mdns_prr", 00:08:10.479 "nvmf_publish_mdns_prr", 00:08:10.479 "nvmf_subsystem_get_listeners", 00:08:10.479 "nvmf_subsystem_get_qpairs", 00:08:10.479 "nvmf_subsystem_get_controllers", 00:08:10.479 "nvmf_get_stats", 00:08:10.479 "nvmf_get_transports", 00:08:10.479 "nvmf_create_transport", 00:08:10.479 "nvmf_get_targets", 00:08:10.479 
"nvmf_delete_target", 00:08:10.479 "nvmf_create_target", 00:08:10.479 "nvmf_subsystem_allow_any_host", 00:08:10.479 "nvmf_subsystem_set_keys", 00:08:10.479 "nvmf_subsystem_remove_host", 00:08:10.479 "nvmf_subsystem_add_host", 00:08:10.479 "nvmf_ns_remove_host", 00:08:10.479 "nvmf_ns_add_host", 00:08:10.479 "nvmf_subsystem_remove_ns", 00:08:10.479 "nvmf_subsystem_set_ns_ana_group", 00:08:10.479 "nvmf_subsystem_add_ns", 00:08:10.479 "nvmf_subsystem_listener_set_ana_state", 00:08:10.479 "nvmf_discovery_get_referrals", 00:08:10.479 "nvmf_discovery_remove_referral", 00:08:10.479 "nvmf_discovery_add_referral", 00:08:10.479 "nvmf_subsystem_remove_listener", 00:08:10.479 "nvmf_subsystem_add_listener", 00:08:10.479 "nvmf_delete_subsystem", 00:08:10.479 "nvmf_create_subsystem", 00:08:10.479 "nvmf_get_subsystems", 00:08:10.479 "env_dpdk_get_mem_stats", 00:08:10.479 "nbd_get_disks", 00:08:10.479 "nbd_stop_disk", 00:08:10.479 "nbd_start_disk", 00:08:10.479 "ublk_recover_disk", 00:08:10.479 "ublk_get_disks", 00:08:10.479 "ublk_stop_disk", 00:08:10.479 "ublk_start_disk", 00:08:10.479 "ublk_destroy_target", 00:08:10.479 "ublk_create_target", 00:08:10.479 "virtio_blk_create_transport", 00:08:10.479 "virtio_blk_get_transports", 00:08:10.479 "vhost_controller_set_coalescing", 00:08:10.479 "vhost_get_controllers", 00:08:10.479 "vhost_delete_controller", 00:08:10.479 "vhost_create_blk_controller", 00:08:10.479 "vhost_scsi_controller_remove_target", 00:08:10.479 "vhost_scsi_controller_add_target", 00:08:10.479 "vhost_start_scsi_controller", 00:08:10.479 "vhost_create_scsi_controller", 00:08:10.479 "thread_set_cpumask", 00:08:10.479 "scheduler_set_options", 00:08:10.479 "framework_get_governor", 00:08:10.479 "framework_get_scheduler", 00:08:10.479 "framework_set_scheduler", 00:08:10.479 "framework_get_reactors", 00:08:10.479 "thread_get_io_channels", 00:08:10.479 "thread_get_pollers", 00:08:10.479 "thread_get_stats", 00:08:10.479 "framework_monitor_context_switch", 00:08:10.479 "spdk_kill_instance", 00:08:10.479 "log_enable_timestamps", 00:08:10.479 "log_get_flags", 00:08:10.479 "log_clear_flag", 00:08:10.479 "log_set_flag", 00:08:10.479 "log_get_level", 00:08:10.479 "log_set_level", 00:08:10.479 "log_get_print_level", 00:08:10.479 "log_set_print_level", 00:08:10.479 "framework_enable_cpumask_locks", 00:08:10.479 "framework_disable_cpumask_locks", 00:08:10.479 "framework_wait_init", 00:08:10.479 "framework_start_init", 00:08:10.479 "scsi_get_devices", 00:08:10.479 "bdev_get_histogram", 00:08:10.479 "bdev_enable_histogram", 00:08:10.479 "bdev_set_qos_limit", 00:08:10.479 "bdev_set_qd_sampling_period", 00:08:10.479 "bdev_get_bdevs", 00:08:10.479 "bdev_reset_iostat", 00:08:10.479 "bdev_get_iostat", 00:08:10.479 "bdev_examine", 00:08:10.479 "bdev_wait_for_examine", 00:08:10.479 "bdev_set_options", 00:08:10.479 "accel_get_stats", 00:08:10.479 "accel_set_options", 00:08:10.479 "accel_set_driver", 00:08:10.479 "accel_crypto_key_destroy", 00:08:10.479 "accel_crypto_keys_get", 00:08:10.479 "accel_crypto_key_create", 00:08:10.479 "accel_assign_opc", 00:08:10.479 "accel_get_module_info", 00:08:10.479 "accel_get_opc_assignments", 00:08:10.479 "vmd_rescan", 00:08:10.479 "vmd_remove_device", 00:08:10.479 "vmd_enable", 00:08:10.479 "sock_get_default_impl", 00:08:10.479 "sock_set_default_impl", 00:08:10.479 "sock_impl_set_options", 00:08:10.479 "sock_impl_get_options", 00:08:10.479 "iobuf_get_stats", 00:08:10.479 "iobuf_set_options", 00:08:10.479 "keyring_get_keys", 00:08:10.479 "framework_get_pci_devices", 00:08:10.479 
"framework_get_config", 00:08:10.479 "framework_get_subsystems", 00:08:10.479 "fsdev_set_opts", 00:08:10.479 "fsdev_get_opts", 00:08:10.479 "trace_get_info", 00:08:10.479 "trace_get_tpoint_group_mask", 00:08:10.479 "trace_disable_tpoint_group", 00:08:10.479 "trace_enable_tpoint_group", 00:08:10.479 "trace_clear_tpoint_mask", 00:08:10.479 "trace_set_tpoint_mask", 00:08:10.479 "notify_get_notifications", 00:08:10.479 "notify_get_types", 00:08:10.479 "spdk_get_version", 00:08:10.479 "rpc_get_methods" 00:08:10.479 ] 00:08:10.479 18:06:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:10.479 18:06:20 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:10.479 18:06:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:10.479 18:06:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:10.479 18:06:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58773 00:08:10.479 18:06:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58773 ']' 00:08:10.479 18:06:20 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58773 00:08:10.479 18:06:20 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:08:10.479 18:06:20 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.479 18:06:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58773 00:08:10.479 18:06:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:10.479 18:06:20 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:10.479 killing process with pid 58773 00:08:10.479 18:06:20 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58773' 00:08:10.479 18:06:20 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58773 00:08:10.479 18:06:20 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58773 00:08:13.026 00:08:13.026 real 0m4.259s 00:08:13.026 user 0m7.666s 00:08:13.026 sys 0m0.656s 00:08:13.026 18:06:23 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.026 18:06:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:13.026 ************************************ 00:08:13.026 END TEST spdkcli_tcp 00:08:13.026 ************************************ 00:08:13.026 18:06:23 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:13.026 18:06:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:13.026 18:06:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.026 18:06:23 -- common/autotest_common.sh@10 -- # set +x 00:08:13.026 ************************************ 00:08:13.026 START TEST dpdk_mem_utility 00:08:13.026 ************************************ 00:08:13.026 18:06:23 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:13.026 * Looking for test storage... 
00:08:13.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:13.026 18:06:23 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:13.026 18:06:23 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:08:13.026 18:06:23 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:13.284 18:06:23 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:13.284 18:06:23 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.284 18:06:23 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.284 18:06:23 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.284 18:06:23 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.284 18:06:23 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.284 18:06:23 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.284 18:06:23 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.284 18:06:23 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.284 18:06:23 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.284 18:06:23 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.284 18:06:23 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.284 18:06:23 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:13.284 18:06:23 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:13.284 18:06:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.284 18:06:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:13.285 18:06:23 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:13.285 18:06:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:13.285 18:06:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.285 18:06:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:13.285 18:06:23 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.285 18:06:23 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:13.285 18:06:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:13.285 18:06:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.285 18:06:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:13.285 18:06:23 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.285 18:06:23 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.285 18:06:23 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.285 18:06:23 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:13.285 18:06:23 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.285 18:06:23 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:13.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.285 --rc genhtml_branch_coverage=1 00:08:13.285 --rc genhtml_function_coverage=1 00:08:13.285 --rc genhtml_legend=1 00:08:13.285 --rc geninfo_all_blocks=1 00:08:13.285 --rc geninfo_unexecuted_blocks=1 00:08:13.285 00:08:13.285 ' 00:08:13.285 18:06:23 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:13.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.285 --rc 
genhtml_branch_coverage=1 00:08:13.285 --rc genhtml_function_coverage=1 00:08:13.285 --rc genhtml_legend=1 00:08:13.285 --rc geninfo_all_blocks=1 00:08:13.285 --rc geninfo_unexecuted_blocks=1 00:08:13.285 00:08:13.285 ' 00:08:13.285 18:06:23 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:13.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.285 --rc genhtml_branch_coverage=1 00:08:13.285 --rc genhtml_function_coverage=1 00:08:13.285 --rc genhtml_legend=1 00:08:13.285 --rc geninfo_all_blocks=1 00:08:13.285 --rc geninfo_unexecuted_blocks=1 00:08:13.285 00:08:13.285 ' 00:08:13.285 18:06:23 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:13.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.285 --rc genhtml_branch_coverage=1 00:08:13.285 --rc genhtml_function_coverage=1 00:08:13.285 --rc genhtml_legend=1 00:08:13.285 --rc geninfo_all_blocks=1 00:08:13.285 --rc geninfo_unexecuted_blocks=1 00:08:13.285 00:08:13.285 ' 00:08:13.285 18:06:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:13.285 18:06:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58895 00:08:13.285 18:06:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:13.285 18:06:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58895 00:08:13.285 18:06:23 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58895 ']' 00:08:13.285 18:06:23 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.285 18:06:23 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.285 18:06:23 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.285 18:06:23 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.285 18:06:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:13.285 [2024-12-06 18:06:23.771913] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:08:13.285 [2024-12-06 18:06:23.772192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58895 ] 00:08:13.542 [2024-12-06 18:06:23.953775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.542 [2024-12-06 18:06:24.071088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.480 18:06:24 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.480 18:06:24 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:14.480 18:06:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:14.480 18:06:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:14.480 18:06:24 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.480 18:06:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:14.480 { 00:08:14.480 "filename": "/tmp/spdk_mem_dump.txt" 00:08:14.480 } 00:08:14.480 18:06:24 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.480 18:06:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:14.480 DPDK memory size 824.000000 MiB in 1 heap(s) 00:08:14.480 1 heaps totaling size 824.000000 MiB 00:08:14.480 size: 824.000000 MiB heap id: 0 00:08:14.480 end heaps---------- 00:08:14.480 9 mempools totaling size 603.782043 MiB 00:08:14.480 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:14.480 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:14.480 size: 100.555481 MiB name: bdev_io_58895 00:08:14.480 size: 50.003479 MiB name: msgpool_58895 00:08:14.480 size: 36.509338 MiB name: fsdev_io_58895 00:08:14.480 size: 21.763794 MiB name: PDU_Pool 00:08:14.480 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:14.480 size: 4.133484 MiB name: evtpool_58895 00:08:14.480 size: 0.026123 MiB name: Session_Pool 00:08:14.480 end mempools------- 00:08:14.480 6 memzones totaling size 4.142822 MiB 00:08:14.480 size: 1.000366 MiB name: RG_ring_0_58895 00:08:14.480 size: 1.000366 MiB name: RG_ring_1_58895 00:08:14.480 size: 1.000366 MiB name: RG_ring_4_58895 00:08:14.480 size: 1.000366 MiB name: RG_ring_5_58895 00:08:14.480 size: 0.125366 MiB name: RG_ring_2_58895 00:08:14.480 size: 0.015991 MiB name: RG_ring_3_58895 00:08:14.480 end memzones------- 00:08:14.746 18:06:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:14.746 heap id: 0 total size: 824.000000 MiB number of busy elements: 320 number of free elements: 18 00:08:14.746 list of free elements. 
size: 16.780151 MiB 00:08:14.746 element at address: 0x200006400000 with size: 1.995972 MiB 00:08:14.746 element at address: 0x20000a600000 with size: 1.995972 MiB 00:08:14.746 element at address: 0x200003e00000 with size: 1.991028 MiB 00:08:14.746 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:14.746 element at address: 0x200019900040 with size: 0.999939 MiB 00:08:14.746 element at address: 0x200019a00000 with size: 0.999084 MiB 00:08:14.746 element at address: 0x200032600000 with size: 0.994324 MiB 00:08:14.746 element at address: 0x200000400000 with size: 0.992004 MiB 00:08:14.746 element at address: 0x200019200000 with size: 0.959656 MiB 00:08:14.746 element at address: 0x200019d00040 with size: 0.936401 MiB 00:08:14.746 element at address: 0x200000200000 with size: 0.716980 MiB 00:08:14.746 element at address: 0x20001b400000 with size: 0.561462 MiB 00:08:14.746 element at address: 0x200000c00000 with size: 0.489197 MiB 00:08:14.746 element at address: 0x200019600000 with size: 0.488220 MiB 00:08:14.746 element at address: 0x200019e00000 with size: 0.485413 MiB 00:08:14.746 element at address: 0x200012c00000 with size: 0.433228 MiB 00:08:14.746 element at address: 0x200028800000 with size: 0.390442 MiB 00:08:14.746 element at address: 0x200000800000 with size: 0.350891 MiB 00:08:14.746 list of standard malloc elements. size: 199.288940 MiB 00:08:14.746 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:08:14.746 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:08:14.746 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:14.746 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:14.746 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:08:14.746 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:14.746 element at address: 0x200019deff40 with size: 0.062683 MiB 00:08:14.746 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:14.746 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:08:14.746 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:08:14.746 element at address: 0x200012bff040 with size: 0.000305 MiB 00:08:14.746 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:08:14.746 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:08:14.746 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:08:14.746 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:08:14.746 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:08:14.746 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200000cff000 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012bff180 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012bff280 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012bff380 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012bff480 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012bff580 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012bff680 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012bff780 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012bff880 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012bff980 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200019affc40 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4918c0 with size: 0.000244 MiB 
00:08:14.747 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:08:14.747 element at 
address: 0x20001b494ac0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200028863f40 with size: 0.000244 MiB 00:08:14.747 element at address: 0x200028864040 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886af80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886b080 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886b180 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886b280 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886b380 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886b480 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886b580 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886b680 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886b780 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886b880 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886b980 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886be80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886c080 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886c180 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886c280 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886c380 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886c480 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886c580 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886c680 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886c780 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886c880 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886c980 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886d080 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886d180 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886d280 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886d380 
with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886d480 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886d580 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886d680 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886d780 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886d880 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886d980 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886da80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886db80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886de80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886df80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886e080 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886e180 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886e280 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886e380 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886e480 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886e580 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886e680 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886e780 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886e880 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886e980 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886f080 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886f180 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886f280 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886f380 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886f480 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886f580 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886f680 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886f780 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886f880 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886f980 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:08:14.747 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:08:14.747 list of memzone associated elements. 
size: 607.930908 MiB 00:08:14.747 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:08:14.747 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:14.747 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:08:14.747 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:14.747 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:08:14.747 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58895_0 00:08:14.747 element at address: 0x200000dff340 with size: 48.003113 MiB 00:08:14.747 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58895_0 00:08:14.747 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:08:14.747 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58895_0 00:08:14.747 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:08:14.747 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:14.747 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:08:14.747 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:14.747 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:08:14.747 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58895_0 00:08:14.747 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:08:14.747 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58895 00:08:14.747 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:14.747 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58895 00:08:14.747 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:08:14.747 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:14.747 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:08:14.747 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:14.747 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:14.747 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:14.747 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:08:14.747 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:14.747 element at address: 0x200000cff100 with size: 1.000549 MiB 00:08:14.747 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58895 00:08:14.747 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:08:14.747 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58895 00:08:14.747 element at address: 0x200019affd40 with size: 1.000549 MiB 00:08:14.747 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58895 00:08:14.747 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:08:14.747 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58895 00:08:14.747 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:08:14.747 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58895 00:08:14.747 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:08:14.748 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58895 00:08:14.748 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:08:14.748 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:14.748 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:08:14.748 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:14.748 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:08:14.748 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:08:14.748 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:08:14.748 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58895 00:08:14.748 element at address: 0x20000085df80 with size: 0.125549 MiB 00:08:14.748 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58895 00:08:14.748 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:08:14.748 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:14.748 element at address: 0x200028864140 with size: 0.023804 MiB 00:08:14.748 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:14.748 element at address: 0x200000859d40 with size: 0.016174 MiB 00:08:14.748 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58895 00:08:14.748 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:08:14.748 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:14.748 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:08:14.748 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58895 00:08:14.748 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:08:14.748 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58895 00:08:14.748 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:08:14.748 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58895 00:08:14.748 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:08:14.748 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:14.748 18:06:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:14.748 18:06:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58895 00:08:14.748 18:06:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58895 ']' 00:08:14.748 18:06:25 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58895 00:08:14.748 18:06:25 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:14.748 18:06:25 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.748 18:06:25 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58895 00:08:14.748 18:06:25 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.748 18:06:25 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.748 18:06:25 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58895' 00:08:14.748 killing process with pid 58895 00:08:14.748 18:06:25 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58895 00:08:14.748 18:06:25 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58895 00:08:17.287 00:08:17.287 real 0m4.128s 00:08:17.287 user 0m3.969s 00:08:17.288 sys 0m0.627s 00:08:17.288 ************************************ 00:08:17.288 END TEST dpdk_mem_utility 00:08:17.288 ************************************ 00:08:17.288 18:06:27 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.288 18:06:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:17.288 18:06:27 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:17.288 18:06:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.288 18:06:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.288 18:06:27 -- common/autotest_common.sh@10 -- # set +x 
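The dpdk_mem_utility run above exercises two pieces: the env_dpdk_get_mem_stats RPC, which makes the target write its DPDK memory statistics to the file named in the JSON reply (/tmp/spdk_mem_dump.txt here), and scripts/dpdk_mem_info.py, which parses that dump into the heap/mempool/memzone summary and, with -m 0, the per-element listing printed above. A minimal sketch of the same sequence by hand, assuming a built tree at the repo paths shown in this log and a host with hugepages already configured:

    # Start a one-core target in the background (mask matches the -c 0x1
    # visible in the EAL parameters above).
    ./build/bin/spdk_tgt -m 0x1 &

    # Ask the target to dump memory stats; it replies with the dump filename.
    ./scripts/rpc.py env_dpdk_get_mem_stats

    # Summarize heaps, mempools and memzones from /tmp/spdk_mem_dump.txt.
    ./scripts/dpdk_mem_info.py

    # Per-element detail for heap 0, as in the long listing above.
    ./scripts/dpdk_mem_info.py -m 0

    kill %1

The test script itself wraps the same calls in the trap/killprocess pair visible in the trace, so the spdk_tgt (pid 58895 in this run) is torn down even if an RPC fails.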
00:08:17.288 ************************************ 00:08:17.288 START TEST event 00:08:17.288 ************************************ 00:08:17.288 18:06:27 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:17.288 * Looking for test storage... 00:08:17.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:17.288 18:06:27 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:17.288 18:06:27 event -- common/autotest_common.sh@1711 -- # lcov --version 00:08:17.288 18:06:27 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:17.288 18:06:27 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:17.288 18:06:27 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.288 18:06:27 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.288 18:06:27 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.288 18:06:27 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.288 18:06:27 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.288 18:06:27 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.288 18:06:27 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.288 18:06:27 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.288 18:06:27 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.288 18:06:27 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.288 18:06:27 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.288 18:06:27 event -- scripts/common.sh@344 -- # case "$op" in 00:08:17.288 18:06:27 event -- scripts/common.sh@345 -- # : 1 00:08:17.288 18:06:27 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.288 18:06:27 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:17.288 18:06:27 event -- scripts/common.sh@365 -- # decimal 1 00:08:17.288 18:06:27 event -- scripts/common.sh@353 -- # local d=1 00:08:17.288 18:06:27 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.288 18:06:27 event -- scripts/common.sh@355 -- # echo 1 00:08:17.288 18:06:27 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.288 18:06:27 event -- scripts/common.sh@366 -- # decimal 2 00:08:17.288 18:06:27 event -- scripts/common.sh@353 -- # local d=2 00:08:17.288 18:06:27 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.288 18:06:27 event -- scripts/common.sh@355 -- # echo 2 00:08:17.288 18:06:27 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.288 18:06:27 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.288 18:06:27 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.288 18:06:27 event -- scripts/common.sh@368 -- # return 0 00:08:17.288 18:06:27 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.288 18:06:27 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:17.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.288 --rc genhtml_branch_coverage=1 00:08:17.288 --rc genhtml_function_coverage=1 00:08:17.288 --rc genhtml_legend=1 00:08:17.288 --rc geninfo_all_blocks=1 00:08:17.288 --rc geninfo_unexecuted_blocks=1 00:08:17.288 00:08:17.288 ' 00:08:17.288 18:06:27 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:17.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.288 --rc genhtml_branch_coverage=1 00:08:17.288 --rc genhtml_function_coverage=1 00:08:17.288 --rc genhtml_legend=1 00:08:17.288 --rc 
geninfo_all_blocks=1 00:08:17.288 --rc geninfo_unexecuted_blocks=1 00:08:17.288 00:08:17.288 ' 00:08:17.288 18:06:27 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:17.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.288 --rc genhtml_branch_coverage=1 00:08:17.288 --rc genhtml_function_coverage=1 00:08:17.288 --rc genhtml_legend=1 00:08:17.288 --rc geninfo_all_blocks=1 00:08:17.288 --rc geninfo_unexecuted_blocks=1 00:08:17.288 00:08:17.288 ' 00:08:17.288 18:06:27 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:17.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.288 --rc genhtml_branch_coverage=1 00:08:17.288 --rc genhtml_function_coverage=1 00:08:17.288 --rc genhtml_legend=1 00:08:17.288 --rc geninfo_all_blocks=1 00:08:17.288 --rc geninfo_unexecuted_blocks=1 00:08:17.288 00:08:17.288 ' 00:08:17.288 18:06:27 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:17.288 18:06:27 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:17.288 18:06:27 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:17.288 18:06:27 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:17.288 18:06:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.288 18:06:27 event -- common/autotest_common.sh@10 -- # set +x 00:08:17.288 ************************************ 00:08:17.288 START TEST event_perf 00:08:17.288 ************************************ 00:08:17.288 18:06:27 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:17.547 Running I/O for 1 seconds...[2024-12-06 18:06:27.899840] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:08:17.547 [2024-12-06 18:06:27.900052] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59003 ] 00:08:17.547 [2024-12-06 18:06:28.079482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.807 [2024-12-06 18:06:28.202096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.807 [2024-12-06 18:06:28.202295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.807 Running I/O for 1 seconds...[2024-12-06 18:06:28.202305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.807 [2024-12-06 18:06:28.202300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.185 00:08:19.185 lcore 0: 204203 00:08:19.185 lcore 1: 204202 00:08:19.185 lcore 2: 204203 00:08:19.185 lcore 3: 204202 00:08:19.185 done. 
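The per-lcore counters just printed are event_perf's result: each line is the number of events that reactor processed during the one-second window, so four nearly identical counts (~204k each) indicate the event framework spread the load evenly across the 0xF core mask. The standalone invocation, exactly as run_test launched it above:

    # -m 0xF starts reactors on lcores 0-3, -t 1 runs the loop for one second.
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1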
00:08:19.185 ************************************ 00:08:19.185 END TEST event_perf 00:08:19.185 ************************************ 00:08:19.185 00:08:19.185 real 0m1.601s 00:08:19.185 user 0m4.367s 00:08:19.185 sys 0m0.111s 00:08:19.185 18:06:29 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.185 18:06:29 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:19.185 18:06:29 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:19.185 18:06:29 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:19.185 18:06:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.185 18:06:29 event -- common/autotest_common.sh@10 -- # set +x 00:08:19.185 ************************************ 00:08:19.185 START TEST event_reactor 00:08:19.185 ************************************ 00:08:19.185 18:06:29 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:19.185 [2024-12-06 18:06:29.565119] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:08:19.185 [2024-12-06 18:06:29.565454] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59048 ] 00:08:19.185 [2024-12-06 18:06:29.748024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.445 [2024-12-06 18:06:29.864082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.864 test_start 00:08:20.864 oneshot 00:08:20.864 tick 100 00:08:20.864 tick 100 00:08:20.864 tick 250 00:08:20.864 tick 100 00:08:20.864 tick 100 00:08:20.864 tick 100 00:08:20.864 tick 250 00:08:20.864 tick 500 00:08:20.864 tick 100 00:08:20.864 tick 100 00:08:20.864 tick 250 00:08:20.864 tick 100 00:08:20.864 tick 100 00:08:20.864 test_end 00:08:20.864 ************************************ 00:08:20.864 END TEST event_reactor 00:08:20.864 ************************************ 00:08:20.864 00:08:20.864 real 0m1.580s 00:08:20.864 user 0m1.348s 00:08:20.864 sys 0m0.122s 00:08:20.864 18:06:31 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.864 18:06:31 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:20.864 18:06:31 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:20.864 18:06:31 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:20.864 18:06:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.864 18:06:31 event -- common/autotest_common.sh@10 -- # set +x 00:08:20.864 ************************************ 00:08:20.864 START TEST event_reactor_perf 00:08:20.864 ************************************ 00:08:20.864 18:06:31 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:20.864 [2024-12-06 18:06:31.193798] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
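The oneshot/tick trace from the event_reactor unit that just completed is effectively the assertion itself: the numbers appear to be the registration periods of the periodic pollers the test arms on reactor 0, so tick 100 firing more often than tick 250 and tick 500 between test_start and test_end is what a passing run looks like, with the single oneshot line confirming one-shot event delivery. It can be re-run in isolation the same way event.sh does:

    # Single reactor, one-second run, as in the trace above.
    /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1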
00:08:20.864 [2024-12-06 18:06:31.194085] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59084 ] 00:08:20.865 [2024-12-06 18:06:31.375535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.124 [2024-12-06 18:06:31.494920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.503 test_start 00:08:22.503 test_end 00:08:22.503 Performance: 378036 events per second 00:08:22.503 ************************************ 00:08:22.503 END TEST event_reactor_perf 00:08:22.503 ************************************ 00:08:22.503 00:08:22.503 real 0m1.570s 00:08:22.503 user 0m1.351s 00:08:22.503 sys 0m0.109s 00:08:22.503 18:06:32 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.503 18:06:32 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:22.503 18:06:32 event -- event/event.sh@49 -- # uname -s 00:08:22.503 18:06:32 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:22.503 18:06:32 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:22.503 18:06:32 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.503 18:06:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.503 18:06:32 event -- common/autotest_common.sh@10 -- # set +x 00:08:22.503 ************************************ 00:08:22.503 START TEST event_scheduler 00:08:22.503 ************************************ 00:08:22.503 18:06:32 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:22.503 * Looking for test storage... 
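reactor_perf, which finished just above at 378036 events per second, is the tighter microbenchmark of the pair: a single reactor on core 0 turns events around as fast as it can for the one-second window, so the figure works out to roughly 2.6 microseconds of event-scheduling overhead per event rather than a cross-core throughput number. Reconstructed invocation, per the trace:

    # Single core, one second; prints "Performance: N events per second".
    /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1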
00:08:22.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:22.503 18:06:32 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:22.503 18:06:32 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:08:22.503 18:06:32 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:22.503 18:06:33 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.503 18:06:33 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:22.503 18:06:33 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.503 18:06:33 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:22.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.503 --rc genhtml_branch_coverage=1 00:08:22.503 --rc genhtml_function_coverage=1 00:08:22.503 --rc genhtml_legend=1 00:08:22.503 --rc geninfo_all_blocks=1 00:08:22.503 --rc geninfo_unexecuted_blocks=1 00:08:22.503 00:08:22.503 ' 00:08:22.503 18:06:33 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:22.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.503 --rc genhtml_branch_coverage=1 00:08:22.503 --rc genhtml_function_coverage=1 00:08:22.503 --rc genhtml_legend=1 00:08:22.503 --rc geninfo_all_blocks=1 00:08:22.503 --rc geninfo_unexecuted_blocks=1 00:08:22.503 00:08:22.503 ' 00:08:22.503 18:06:33 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:22.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.503 --rc genhtml_branch_coverage=1 00:08:22.503 --rc genhtml_function_coverage=1 00:08:22.503 --rc genhtml_legend=1 00:08:22.503 --rc geninfo_all_blocks=1 00:08:22.503 --rc geninfo_unexecuted_blocks=1 00:08:22.503 00:08:22.503 ' 00:08:22.503 18:06:33 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:22.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.503 --rc genhtml_branch_coverage=1 00:08:22.503 --rc genhtml_function_coverage=1 00:08:22.503 --rc genhtml_legend=1 00:08:22.503 --rc geninfo_all_blocks=1 00:08:22.503 --rc geninfo_unexecuted_blocks=1 00:08:22.503 00:08:22.503 ' 00:08:22.503 18:06:33 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:22.503 18:06:33 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59155 00:08:22.503 18:06:33 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:22.503 18:06:33 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:22.503 18:06:33 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59155 00:08:22.503 18:06:33 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59155 ']' 00:08:22.503 18:06:33 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.503 18:06:33 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.504 18:06:33 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.504 18:06:33 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.504 18:06:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:22.762 [2024-12-06 18:06:33.150074] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:08:22.762 [2024-12-06 18:06:33.150527] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59155 ] 00:08:22.762 [2024-12-06 18:06:33.332605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.020 [2024-12-06 18:06:33.461758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.020 [2024-12-06 18:06:33.461865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.020 [2024-12-06 18:06:33.462009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.020 [2024-12-06 18:06:33.462042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.586 18:06:34 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.586 18:06:34 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:23.586 18:06:34 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:23.586 18:06:34 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.586 18:06:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:23.586 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:23.586 POWER: Cannot set governor of lcore 0 to userspace 00:08:23.586 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:23.586 POWER: Cannot set governor of lcore 0 to performance 00:08:23.586 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:23.586 POWER: Cannot set governor of lcore 0 to userspace 00:08:23.586 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:23.586 POWER: Cannot set governor of lcore 0 to userspace 00:08:23.586 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:23.586 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:23.586 POWER: Unable to set Power Management Environment for lcore 0 00:08:23.586 [2024-12-06 18:06:34.039656] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:23.586 [2024-12-06 18:06:34.039709] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:23.586 [2024-12-06 18:06:34.039743] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:23.586 [2024-12-06 18:06:34.039855] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:23.586 [2024-12-06 18:06:34.039898] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:23.586 [2024-12-06 18:06:34.039993] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:23.586 18:06:34 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.586 18:06:34 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:23.587 18:06:34 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.587 18:06:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:23.845 [2024-12-06 18:06:34.387127] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:23.845 18:06:34 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.845 18:06:34 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:23.845 18:06:34 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.845 18:06:34 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.845 18:06:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:23.845 ************************************ 00:08:23.845 START TEST scheduler_create_thread 00:08:23.845 ************************************ 00:08:23.845 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:23.845 18:06:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:23.845 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.845 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:24.105 2 00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:24.105 3 00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:24.105 4 00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:24.105 5
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:24.105 6
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:24.105 7
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:24.105 8
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:24.105 9
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:24.105 10
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.105 18:06:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:25.074 ************************************
00:08:25.074 END TEST scheduler_create_thread
00:08:25.074 ************************************
00:08:25.074 18:06:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.074
00:08:25.074 real 0m1.180s
00:08:25.074 user 0m0.016s
00:08:25.074 sys 0m0.002s
00:08:25.074 18:06:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:25.074 18:06:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:25.332 18:06:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:08:25.332 18:06:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59155
00:08:25.332 18:06:35 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59155 ']'
00:08:25.332 18:06:35 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59155
00:08:25.332 18:06:35 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:08:25.332 18:06:35 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:25.332 18:06:35 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59155
00:08:25.332 killing process with pid 59155 18:06:35 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:08:25.332 18:06:35 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:08:25.332 18:06:35 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59155'
00:08:25.332 18:06:35 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59155
00:08:25.332 18:06:35 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59155
00:08:25.589 [2024-12-06 18:06:36.064312] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:08:26.967
00:08:26.967 real 0m4.429s
00:08:26.967 user 0m7.530s
00:08:26.967 sys 0m0.564s
00:08:26.967 ************************************
00:08:26.967 END TEST event_scheduler
00:08:26.967 ************************************
00:08:26.967 18:06:37 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:26.967 18:06:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:26.967 18:06:37 event -- event/event.sh@51 -- # modprobe -n nbd
00:08:26.967 18:06:37 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:08:26.967 18:06:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:26.967 18:06:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:26.967 18:06:37 event -- common/autotest_common.sh@10 -- # set +x
00:08:26.967 ************************************
00:08:26.967 START TEST app_repeat
00:08:26.967 ************************************
00:08:26.967 18:06:37 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:08:26.967 18:06:37 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:26.967 18:06:37 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:26.967 18:06:37 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:08:26.967 18:06:37 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:26.967 18:06:37 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:08:26.967 18:06:37 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:08:26.967 18:06:37 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:08:26.967 Process app_repeat pid: 59250
00:08:26.967 18:06:37 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59250
00:08:26.967 18:06:37 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:08:26.967 18:06:37 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:08:26.967 18:06:37 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59250'
00:08:26.967 18:06:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:08:26.967 spdk_app_start Round 0
00:08:26.967 18:06:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:08:26.967 18:06:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59250 /var/tmp/spdk-nbd.sock
00:08:26.967 18:06:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59250 ']'
00:08:26.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:26.967 18:06:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:26.967 18:06:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:26.967 18:06:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:26.967 18:06:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:26.967 18:06:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:26.967 [2024-12-06 18:06:37.398724] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization...
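A note on the scheduler_create_thread test that finishes above, before the app_repeat output continues: that whole test is driven over JSON-RPC against the scheduler test app. A minimal sketch of the same call sequence, assuming the app from test/event/scheduler is running and that scheduler_thread_create prints the new thread's id (as the thread_id=11/12 assignments in the trace suggest; the rpc path below is illustrative, not taken from this log):

  rpc='scripts/rpc.py --plugin scheduler_plugin'
  # idle threads pinned to one core each (cpumask 0x1..0x8, 0% active)
  for mask in 0x1 0x2 0x4 0x8; do
      $rpc scheduler_thread_create -n idle_pinned -m "$mask" -a 0
  done
  $rpc scheduler_thread_create -n one_third_active -a 30    # unpinned, 30% active
  tid=$($rpc scheduler_thread_create -n half_active -a 0)   # capture the returned thread id
  $rpc scheduler_thread_set_active "$tid" 50                # raise its simulated load to 50%
  tid=$($rpc scheduler_thread_create -n deleted -a 100)
  $rpc scheduler_thread_delete "$tid"                       # threads can be torn down by id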
00:08:26.967 [2024-12-06 18:06:37.399056] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59250 ]
00:08:27.226 [2024-12-06 18:06:37.600931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:27.226 [2024-12-06 18:06:37.722574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:27.226 [2024-12-06 18:06:37.722605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:27.794 18:06:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:27.794 18:06:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:08:27.794 18:06:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:28.053 Malloc0
00:08:28.054 18:06:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:28.312 Malloc1
00:08:28.572 18:06:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:28.572 18:06:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:28.572 18:06:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:28.572 18:06:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:28.572 18:06:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:28.572 18:06:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:28.572 18:06:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:28.572 18:06:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:28.572 18:06:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:28.572 18:06:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:28.572 18:06:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:28.572 18:06:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:28.572 18:06:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:08:28.572 18:06:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:28.572 18:06:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:28.572 18:06:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:08:28.572 /dev/nbd0
00:08:28.572 18:06:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:28.572 18:06:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:28.572 18:06:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:08:28.572 18:06:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:28.572 18:06:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:28.572 18:06:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:28.572 18:06:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:08:28.572 18:06:39 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:28.572 18:06:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:28.572 18:06:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:28.572 18:06:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:28.572 1+0 records in
00:08:28.572 1+0 records out
00:08:28.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403614 s, 10.1 MB/s
00:08:28.572 18:06:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:28.572 18:06:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:28.572 18:06:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:28.831 18:06:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:28.831 18:06:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:28.831 18:06:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:28.831 18:06:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:28.831 18:06:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:08:28.831 /dev/nbd1
00:08:28.831 18:06:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:28.831 18:06:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:28.831 18:06:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:08:28.831 18:06:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:28.831 18:06:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:28.831 18:06:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:28.831 18:06:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:08:28.831 18:06:39 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:28.831 18:06:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:28.831 18:06:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:28.831 18:06:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:28.831 1+0 records in
00:08:28.831 1+0 records out
00:08:28.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044461 s, 9.2 MB/s
00:08:28.831 18:06:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:28.831 18:06:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:28.831 18:06:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:29.090 18:06:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:29.090 18:06:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:29.090 18:06:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:29.090 18:06:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:29.090 18:06:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:29.090 18:06:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:29.090 18:06:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:29.090 18:06:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:29.090 {
00:08:29.090 "nbd_device": "/dev/nbd0",
00:08:29.090 "bdev_name": "Malloc0"
00:08:29.090 },
00:08:29.090 {
00:08:29.090 "nbd_device": "/dev/nbd1",
00:08:29.090 "bdev_name": "Malloc1"
00:08:29.090 }
00:08:29.090 ]'
00:08:29.091 18:06:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:08:29.091 {
00:08:29.091 "nbd_device": "/dev/nbd0",
00:08:29.091 "bdev_name": "Malloc0"
00:08:29.091 },
00:08:29.091 {
00:08:29.091 "nbd_device": "/dev/nbd1",
00:08:29.091 "bdev_name": "Malloc1"
00:08:29.091 }
00:08:29.091 ]'
00:08:29.091 18:06:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:29.350 /dev/nbd1'
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:29.350 /dev/nbd1'
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:08:29.350 256+0 records in
00:08:29.350 256+0 records out
00:08:29.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128042 s, 81.9 MB/s
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:29.350 256+0 records in
00:08:29.350 256+0 records out
00:08:29.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307381 s, 34.1 MB/s
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:29.350 256+0 records in
00:08:29.350 256+0 records out
00:08:29.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0363773 s, 28.8 MB/s
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:29.350 18:06:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:29.609 18:06:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:29.609 18:06:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:29.609 18:06:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:29.609 18:06:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:29.609 18:06:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:29.609 18:06:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:29.609 18:06:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:29.609 18:06:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:29.609 18:06:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:29.609 18:06:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:29.868 18:06:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:29.869 18:06:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:29.869 18:06:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:29.869 18:06:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:29.869 18:06:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:29.869 18:06:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:29.869 18:06:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:29.869 18:06:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:29.869 18:06:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:29.869 18:06:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:29.869 18:06:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:30.128 18:06:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:30.128 18:06:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:30.128 18:06:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:30.128 18:06:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:30.128 18:06:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:08:30.128 18:06:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:30.128 18:06:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:08:30.128 18:06:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:08:30.128 18:06:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:08:30.128 18:06:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:08:30.128 18:06:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:08:30.128 18:06:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:08:30.128 18:06:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:08:30.724 18:06:41 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:08:32.102 [2024-12-06 18:06:42.335954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:32.102 [2024-12-06 18:06:42.457127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:32.102 [2024-12-06 18:06:42.457128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:32.102 [2024-12-06 18:06:42.668835] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:08:32.102 [2024-12-06 18:06:42.668899] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:08:34.010 spdk_app_start Round 1
00:08:34.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:34.010 18:06:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:08:34.010 18:06:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:08:34.010 18:06:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59250 /var/tmp/spdk-nbd.sock
00:08:34.010 18:06:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59250 ']'
00:08:34.010 18:06:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:34.010 18:06:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:34.010 18:06:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
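Each app_repeat round above repeats one nbd round-trip: create malloc bdevs, export them as kernel nbd devices, push random data through the block layer, and verify it reads back. Condensed into a standalone sketch using the same RPCs as the trace (file locations abbreviated; not a verbatim extract of the test scripts):

  sock=/var/tmp/spdk-nbd.sock
  rpc="scripts/rpc.py -s $sock"
  bdev=$($rpc bdev_malloc_create 64 4096)       # 64 MiB bdev, 4 KiB blocks; prints e.g. Malloc0
  $rpc nbd_start_disk "$bdev" /dev/nbd0         # expose the bdev at /dev/nbd0
  dd if=/dev/urandom of=randtest bs=4096 count=256
  dd if=randtest of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write 1 MiB through the device
  cmp -b -n 1M randtest /dev/nbd0               # byte-compare what reads back
  $rpc nbd_stop_disk /dev/nbd0                  # detach before shutting the app down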
00:08:34.010 18:06:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:34.010 18:06:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:34.010 18:06:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:34.010 18:06:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:08:34.010 18:06:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:34.269 Malloc0
00:08:34.269 18:06:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:34.528 Malloc1
00:08:34.528 18:06:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:34.528 18:06:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:34.528 18:06:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:34.528 18:06:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:34.528 18:06:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:34.528 18:06:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:34.528 18:06:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:34.528 18:06:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:34.528 18:06:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:34.528 18:06:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:34.528 18:06:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:34.528 18:06:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:34.528 18:06:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:08:34.528 18:06:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:34.528 18:06:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:34.528 18:06:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:08:34.787 /dev/nbd0
00:08:34.787 18:06:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:34.787 18:06:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:34.787 18:06:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:08:34.787 18:06:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:34.787 18:06:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:34.787 18:06:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:34.787 18:06:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:08:34.787 18:06:45 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:34.787 18:06:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:34.787 18:06:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:34.787 18:06:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:34.787 1+0 records in
00:08:34.787 1+0 records out
00:08:34.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364543 s, 11.2 MB/s
00:08:34.787 18:06:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:34.787 18:06:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:34.787 18:06:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:34.787 18:06:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:34.787 18:06:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:34.787 18:06:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:34.787 18:06:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:34.787 18:06:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:08:35.045 /dev/nbd1
00:08:35.045 18:06:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:35.045 18:06:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:35.045 18:06:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:08:35.045 18:06:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:35.045 18:06:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:35.045 18:06:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:35.045 18:06:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:08:35.045 18:06:45 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:35.045 18:06:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:35.045 18:06:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:35.045 18:06:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:35.045 1+0 records in
00:08:35.045 1+0 records out
00:08:35.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403525 s, 10.2 MB/s
00:08:35.045 18:06:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:35.045 18:06:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:35.045 18:06:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:35.045 18:06:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:35.045 18:06:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:35.045 18:06:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:35.045 18:06:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:35.045 18:06:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:35.045 18:06:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:35.045 18:06:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:35.304 {
00:08:35.304 "nbd_device": "/dev/nbd0",
00:08:35.304 "bdev_name": "Malloc0"
00:08:35.304 },
00:08:35.304 {
00:08:35.304 "nbd_device": "/dev/nbd1",
00:08:35.304 "bdev_name": "Malloc1"
00:08:35.304 }
00:08:35.304 ]' 18:06:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:08:35.304 {
00:08:35.304 "nbd_device": "/dev/nbd0",
00:08:35.304 "bdev_name": "Malloc0"
00:08:35.304 },
00:08:35.304 {
00:08:35.304 "nbd_device": "/dev/nbd1",
00:08:35.304 "bdev_name": "Malloc1"
00:08:35.304 }
00:08:35.304 ]'
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:35.304 /dev/nbd1'
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:35.304 /dev/nbd1'
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:08:35.304 256+0 records in
00:08:35.304 256+0 records out
00:08:35.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114322 s, 91.7 MB/s
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:35.304 256+0 records in
00:08:35.304 256+0 records out
00:08:35.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281167 s, 37.3 MB/s
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:35.304 256+0 records in
00:08:35.304 256+0 records out
00:08:35.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0326537 s, 32.1 MB/s
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:35.304 18:06:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:35.563 18:06:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:35.563 18:06:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:35.563 18:06:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:35.563 18:06:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:35.563 18:06:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:35.563 18:06:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:35.563 18:06:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:35.564 18:06:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:35.564 18:06:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:35.564 18:06:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:35.909 18:06:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:35.909 18:06:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:35.909 18:06:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:35.909 18:06:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:35.909 18:06:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:35.909 18:06:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:35.909 18:06:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:35.909 18:06:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:35.909 18:06:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:35.909 18:06:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:35.909 18:06:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:36.168 18:06:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:36.168 18:06:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:36.168 18:06:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:36.168 18:06:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:36.168 18:06:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:08:36.168 18:06:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:36.168 18:06:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:08:36.168 18:06:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:08:36.168 18:06:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:08:36.168 18:06:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:08:36.168 18:06:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:08:36.168 18:06:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:08:36.168 18:06:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:08:36.738 18:06:47 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:08:37.674 [2024-12-06 18:06:48.241825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:37.934 [2024-12-06 18:06:48.358015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:37.934 [2024-12-06 18:06:48.358044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:38.192 [2024-12-06 18:06:48.557609] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:08:38.192 [2024-12-06 18:06:48.557692] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:08:39.570 spdk_app_start Round 2
00:08:39.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:39.570 18:06:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:08:39.570 18:06:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:08:39.570 18:06:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59250 /var/tmp/spdk-nbd.sock
00:08:39.570 18:06:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59250 ']'
00:08:39.570 18:06:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:39.570 18:06:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:39.570 18:06:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
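The teardown check repeated in every round counts the still-exported devices by parsing nbd_get_disks. A minimal sketch of that counting idiom, under the same assumptions as the sketch above (note the `|| true`: grep -c exits non-zero when it matches nothing, which is why the trace shows a bare `true` right after it):

  count=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
      | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -ne 0 ] && echo "nbd devices still exported: $count"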
00:08:39.570 18:06:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:39.570 18:06:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:39.829 18:06:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:39.829 18:06:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:08:39.829 18:06:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:40.088 Malloc0
00:08:40.088 18:06:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:40.346 Malloc1
00:08:40.346 18:06:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:40.346 18:06:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:40.346 18:06:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:40.346 18:06:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:40.346 18:06:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:40.346 18:06:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:40.346 18:06:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:40.346 18:06:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:40.346 18:06:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:40.346 18:06:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:40.346 18:06:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:40.346 18:06:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:40.346 18:06:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:08:40.346 18:06:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:40.346 18:06:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:40.346 18:06:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:08:40.607 /dev/nbd0
00:08:40.607 18:06:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:40.607 18:06:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:40.607 18:06:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:08:40.607 18:06:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:40.607 18:06:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:40.607 18:06:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:40.607 18:06:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:08:40.607 18:06:51 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:40.607 18:06:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:40.607 18:06:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:40.607 18:06:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:40.607 1+0 records in
00:08:40.607 1+0 records out
00:08:40.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383846 s, 10.7 MB/s
00:08:40.607 18:06:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:40.607 18:06:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:40.607 18:06:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:40.607 18:06:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:40.607 18:06:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:40.607 18:06:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:40.607 18:06:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:40.607 18:06:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:08:40.866 /dev/nbd1
00:08:40.867 18:06:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:40.867 18:06:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:40.867 18:06:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:08:40.867 18:06:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:40.867 18:06:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:40.867 18:06:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:40.867 18:06:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:08:40.867 18:06:51 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:40.867 18:06:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:40.867 18:06:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:40.867 18:06:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:40.867 1+0 records in
00:08:40.867 1+0 records out
00:08:40.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394253 s, 10.4 MB/s
00:08:40.867 18:06:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:40.867 18:06:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:40.867 18:06:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:40.867 18:06:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:40.867 18:06:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:40.867 18:06:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:40.867 18:06:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:40.867 18:06:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:40.867 18:06:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:40.867 18:06:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:41.125 18:06:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:41.125 {
00:08:41.125 "nbd_device": "/dev/nbd0",
00:08:41.125 "bdev_name": "Malloc0"
00:08:41.125 },
00:08:41.125 {
00:08:41.125 "nbd_device": "/dev/nbd1",
00:08:41.125 "bdev_name": "Malloc1"
00:08:41.125 }
00:08:41.125 ]' 18:06:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:08:41.125 {
00:08:41.125 "nbd_device": "/dev/nbd0",
00:08:41.125 "bdev_name": "Malloc0"
00:08:41.125 },
00:08:41.125 {
00:08:41.125 "nbd_device": "/dev/nbd1",
00:08:41.125 "bdev_name": "Malloc1"
00:08:41.125 }
00:08:41.125 ]'
00:08:41.125 18:06:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:41.125 18:06:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:41.125 /dev/nbd1'
00:08:41.125 18:06:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:41.125 /dev/nbd1'
00:08:41.125 18:06:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:41.125 18:06:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:08:41.125 18:06:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:08:41.125 18:06:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:08:41.125 18:06:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:08:41.125 18:06:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:08:41.125 18:06:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:41.125 18:06:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:41.125 18:06:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:08:41.125 18:06:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:41.125 18:06:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:41.125 18:06:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:08:41.125 256+0 records in
00:08:41.125 256+0 records out
00:08:41.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00532998 s, 197 MB/s
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:41.385 256+0 records in
00:08:41.385 256+0 records out
00:08:41.385 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268017 s, 39.1 MB/s
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:41.385 256+0 records in
00:08:41.385 256+0 records out
00:08:41.385 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314644 s, 33.3 MB/s
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:41.385 18:06:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:41.644 18:06:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:41.644 18:06:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:41.645 18:06:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:41.645 18:06:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:41.645 18:06:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:41.645 18:06:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:41.645 18:06:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:41.645 18:06:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:41.645 18:06:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:41.645 18:06:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:41.903 18:06:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:41.903 18:06:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:41.903 18:06:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:41.903 18:06:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:41.903 18:06:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:41.903 18:06:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:41.903 18:06:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:41.903 18:06:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:41.903 18:06:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:41.903 18:06:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:41.903 18:06:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:42.160 18:06:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:42.160 18:06:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:42.160 18:06:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:42.160 18:06:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:42.161 18:06:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:08:42.161 18:06:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:42.161 18:06:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:08:42.161 18:06:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:08:42.161 18:06:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:08:42.161 18:06:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:08:42.161 18:06:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:08:42.161 18:06:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:08:42.161 18:06:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:08:42.726 18:06:53 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:08:44.100 [2024-12-06 18:06:54.328757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:44.100 [2024-12-06 18:06:54.441909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:44.100 [2024-12-06 18:06:54.441909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:44.100 [2024-12-06 18:06:54.630958] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:08:44.100 [2024-12-06 18:06:54.631048] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:08:46.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:46.001 18:06:56 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59250 /var/tmp/spdk-nbd.sock
00:08:46.001 18:06:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59250 ']'
00:08:46.001 18:06:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:46.001 18:06:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:46.001 18:06:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
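The killprocess helper traced in the entries that follow guards each step before signalling. A simplified sketch of that pattern (not SPDK's exact autotest_common.sh implementation):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if it already exited
      local name
      name=$(ps --no-headers -o comm= "$pid")
      [ "$name" = sudo ] && return 1           # mirrors the '[ ... = sudo ]' guard in the trace
      echo "killing process with pid $pid"
      kill "$pid"                              # SIGTERM, so the app can shut down cleanly
      wait "$pid" 2>/dev/null                  # reap it and observe the exit status
  }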
00:08:46.001 18:06:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:46.001 18:06:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:46.001 18:06:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:46.001 18:06:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:08:46.001 18:06:56 event.app_repeat -- event/event.sh@39 -- # killprocess 59250
00:08:46.001 18:06:56 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59250 ']'
00:08:46.001 18:06:56 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59250
00:08:46.001 18:06:56 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:08:46.001 18:06:56 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:46.001 18:06:56 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59250
00:08:46.001 killing process with pid 59250 18:06:56 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:46.001 18:06:56 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:46.001 18:06:56 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59250'
00:08:46.001 18:06:56 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59250
00:08:46.001 18:06:56 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59250
00:08:47.017 spdk_app_start is called in Round 0.
00:08:47.017 Shutdown signal received, stop current app iteration
00:08:47.017 Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 reinitialization...
00:08:47.017 spdk_app_start is called in Round 1.
00:08:47.017 Shutdown signal received, stop current app iteration
00:08:47.017 Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 reinitialization...
00:08:47.017 spdk_app_start is called in Round 2.
00:08:47.017 Shutdown signal received, stop current app iteration
00:08:47.017 Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 reinitialization...
00:08:47.017 spdk_app_start is called in Round 3.
00:08:47.017 Shutdown signal received, stop current app iteration
00:08:47.017 18:06:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:08:47.017 18:06:57 event.app_repeat -- event/event.sh@42 -- # return 0
00:08:47.017
00:08:47.017 real 0m20.164s
00:08:47.017 user 0m43.152s
00:08:47.017 sys 0m3.274s
00:08:47.017 18:06:57 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:47.017 ************************************
00:08:47.017 END TEST app_repeat
00:08:47.017 ************************************
00:08:47.017 18:06:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:47.017 18:06:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:08:47.017 18:06:57 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:08:47.017 18:06:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:47.017 18:06:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:47.017 18:06:57 event -- common/autotest_common.sh@10 -- # set +x
00:08:47.017 ************************************
00:08:47.017 START TEST cpu_locks
00:08:47.017 ************************************
00:08:47.017 18:06:57 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:08:47.276 * Looking for test storage...
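The START TEST/END TEST banners and the real/user/sys block that bracket every test here come from the run_test wrapper invoked throughout this log. Functionally it behaves roughly like the sketch below (a simplification; the real helper in autotest_common.sh also manages xtrace and exit-code bookkeeping):

  run_test() {
      local name=$1; shift
      printf '%s\nSTART TEST %s\n%s\n' '************************************' "$name" '************************************'
      time "$@"          # emits the real/user/sys lines seen above
      local rc=$?
      printf '%s\nEND TEST %s\n%s\n' '************************************' "$name" '************************************'
      return $rc
  }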
00:08:47.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:47.276 18:06:57 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:47.276 18:06:57 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:08:47.276 18:06:57 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:47.276 18:06:57 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:47.276 18:06:57 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.277 18:06:57 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.277 18:06:57 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.277 18:06:57 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:47.277 18:06:57 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.277 18:06:57 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:47.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.277 --rc genhtml_branch_coverage=1 00:08:47.277 --rc genhtml_function_coverage=1 00:08:47.277 --rc genhtml_legend=1 00:08:47.277 --rc geninfo_all_blocks=1 00:08:47.277 --rc geninfo_unexecuted_blocks=1 00:08:47.277 00:08:47.277 ' 00:08:47.277 18:06:57 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:47.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.277 --rc genhtml_branch_coverage=1 00:08:47.277 --rc genhtml_function_coverage=1 
00:08:47.277 --rc genhtml_legend=1 00:08:47.277 --rc geninfo_all_blocks=1 00:08:47.277 --rc geninfo_unexecuted_blocks=1 00:08:47.277 00:08:47.277 ' 00:08:47.277 18:06:57 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:47.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.277 --rc genhtml_branch_coverage=1 00:08:47.277 --rc genhtml_function_coverage=1 00:08:47.277 --rc genhtml_legend=1 00:08:47.277 --rc geninfo_all_blocks=1 00:08:47.277 --rc geninfo_unexecuted_blocks=1 00:08:47.277 00:08:47.277 ' 00:08:47.277 18:06:57 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:47.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.277 --rc genhtml_branch_coverage=1 00:08:47.277 --rc genhtml_function_coverage=1 00:08:47.277 --rc genhtml_legend=1 00:08:47.277 --rc geninfo_all_blocks=1 00:08:47.277 --rc geninfo_unexecuted_blocks=1 00:08:47.277 00:08:47.277 ' 00:08:47.277 18:06:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:47.277 18:06:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:47.277 18:06:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:47.277 18:06:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:47.277 18:06:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.277 18:06:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.277 18:06:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:47.277 ************************************ 00:08:47.277 START TEST default_locks 00:08:47.277 ************************************ 00:08:47.277 18:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:08:47.277 18:06:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59710 00:08:47.277 18:06:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:47.277 18:06:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59710 00:08:47.277 18:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59710 ']' 00:08:47.277 18:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.277 18:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.277 18:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.277 18:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.277 18:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:47.536 [2024-12-06 18:06:57.899937] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
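
The cmp_versions walk traced in the lcov preamble above reduces to a field-by-field numeric compare. A minimal sketch, assuming purely numeric fields (the real scripts/common.sh also vets each field with its decimal helper):

# split both versions on . - or : and compare left to right
lt() {
  local -a ver1 ver2
  local v
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # strictly older
    ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # strictly newer
  done
  return 1   # equal is not less-than
}
lt 1.15 2 && echo 'lcov predates 2.x'   # mirrors the lt 1.15 2 call traced above
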
00:08:47.536 [2024-12-06 18:06:57.900061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59710 ] 00:08:47.536 [2024-12-06 18:06:58.082702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.794 [2024-12-06 18:06:58.197577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.731 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.731 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:08:48.731 18:06:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59710 00:08:48.731 18:06:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59710 00:08:48.731 18:06:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:49.299 18:06:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59710 00:08:49.299 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59710 ']' 00:08:49.299 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59710 00:08:49.299 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:08:49.299 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.299 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59710 00:08:49.299 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:49.299 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:49.299 killing process with pid 59710 00:08:49.299 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59710' 00:08:49.299 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59710 00:08:49.299 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59710 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59710 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59710 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59710 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59710 ']' 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.855 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:51.855 ERROR: process (pid: 59710) is no longer running 00:08:51.855 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59710) - No such process 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:51.855 00:08:51.855 real 0m4.300s 00:08:51.855 user 0m4.331s 00:08:51.855 sys 0m0.691s 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.855 ************************************ 00:08:51.855 18:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:51.855 END TEST default_locks 00:08:51.855 ************************************ 00:08:51.855 18:07:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:51.855 18:07:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.855 18:07:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.855 18:07:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:51.855 ************************************ 00:08:51.855 START TEST default_locks_via_rpc 00:08:51.855 ************************************ 00:08:51.855 18:07:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:08:51.855 18:07:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59785 00:08:51.855 18:07:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:51.855 18:07:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59785 00:08:51.855 18:07:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59785 ']' 00:08:51.855 18:07:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.855 18:07:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
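
The default_locks pass above leans on one probe, traced at cpu_locks.sh@22: ask lslocks which file locks the target pid holds and look for an spdk_cpu_lock entry. A minimal sketch of that helper:

locks_exist() {
  lslocks -p "$1" | grep -q spdk_cpu_lock   # true only if the pid holds a CPU lock file
}
locks_exist 59710   # the pid exercised above
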
00:08:51.855 18:07:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.855 18:07:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.855 18:07:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.855 [2024-12-06 18:07:02.270867] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:08:51.855 [2024-12-06 18:07:02.271005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59785 ] 00:08:52.114 [2024-12-06 18:07:02.451702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.114 [2024-12-06 18:07:02.569326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.049 18:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.049 18:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:53.049 18:07:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:53.049 18:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.049 18:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.049 18:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.049 18:07:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:53.049 18:07:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:53.049 18:07:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:53.049 18:07:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:53.049 18:07:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:53.049 18:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.049 18:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.049 18:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.049 18:07:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59785 00:08:53.049 18:07:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:53.049 18:07:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59785 00:08:53.679 18:07:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59785 00:08:53.680 18:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59785 ']' 00:08:53.680 18:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59785 00:08:53.680 18:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:08:53.680 18:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.680 18:07:03 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59785 00:08:53.680 18:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:53.680 18:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:53.680 killing process with pid 59785 00:08:53.680 18:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59785' 00:08:53.680 18:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59785 00:08:53.680 18:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59785 00:08:56.211 00:08:56.211 real 0m4.201s 00:08:56.211 user 0m4.167s 00:08:56.211 sys 0m0.659s 00:08:56.211 18:07:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.211 18:07:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.211 ************************************ 00:08:56.211 END TEST default_locks_via_rpc 00:08:56.211 ************************************ 00:08:56.211 18:07:06 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:56.211 18:07:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.211 18:07:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.211 18:07:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:56.211 ************************************ 00:08:56.211 START TEST non_locking_app_on_locked_coremask 00:08:56.211 ************************************ 00:08:56.211 18:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:08:56.211 18:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59861 00:08:56.211 18:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:56.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.211 18:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59861 /var/tmp/spdk.sock 00:08:56.211 18:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59861 ']' 00:08:56.211 18:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.211 18:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.211 18:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.211 18:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.211 18:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:56.211 [2024-12-06 18:07:06.545834] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
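
The default_locks_via_rpc pass above drives the same locks over RPC instead of process lifetime: a live target releases its core locks and reclaims them without restarting. The two calls issued through rpc_cmd in the trace, spelled out:

scripts/rpc.py framework_disable_cpumask_locks   # lock files released, target keeps running
scripts/rpc.py framework_enable_cpumask_locks    # locks reclaimed on the same cores
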
00:08:56.211 [2024-12-06 18:07:06.545961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59861 ] 00:08:56.211 [2024-12-06 18:07:06.729421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.471 [2024-12-06 18:07:06.844365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.408 18:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.408 18:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:57.408 18:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:57.408 18:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59881 00:08:57.408 18:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59881 /var/tmp/spdk2.sock 00:08:57.408 18:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59881 ']' 00:08:57.408 18:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:57.408 18:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.408 18:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:57.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:57.408 18:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.408 18:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:57.408 [2024-12-06 18:07:07.836528] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:08:57.408 [2024-12-06 18:07:07.837071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59881 ] 00:08:57.667 [2024-12-06 18:07:08.021481] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
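
non_locking_app_on_locked_coremask, in progress above, stages two targets on one core: the first claims the core-0 lock, the second opts out of locking entirely and must still come up. The two launches from the trace, condensed:

build/bin/spdk_tgt -m 0x1 &                                                  # pid 59861 above: claims core 0
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # pid 59881: takes no lock, coexists
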
00:08:57.667 [2024-12-06 18:07:08.021566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.925 [2024-12-06 18:07:08.258374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.457 18:07:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.457 18:07:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:00.457 18:07:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59861 00:09:00.457 18:07:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:00.457 18:07:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59861 00:09:01.030 18:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59861 00:09:01.030 18:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59861 ']' 00:09:01.030 18:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59861 00:09:01.030 18:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:01.030 18:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.030 18:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59861 00:09:01.030 18:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.030 killing process with pid 59861 00:09:01.030 18:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.030 18:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59861' 00:09:01.030 18:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59861 00:09:01.030 18:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59861 00:09:06.314 18:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59881 00:09:06.314 18:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59881 ']' 00:09:06.314 18:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59881 00:09:06.314 18:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:06.314 18:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.314 18:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59881 00:09:06.314 killing process with pid 59881 00:09:06.314 18:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.314 18:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.314 18:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59881' 00:09:06.314 18:07:16 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59881 00:09:06.314 18:07:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59881 00:09:08.303 00:09:08.303 real 0m12.385s 00:09:08.303 user 0m12.731s 00:09:08.303 sys 0m1.505s 00:09:08.303 18:07:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.303 ************************************ 00:09:08.303 END TEST non_locking_app_on_locked_coremask 00:09:08.303 ************************************ 00:09:08.303 18:07:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:08.562 18:07:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:08.562 18:07:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.562 18:07:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.562 18:07:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:08.562 ************************************ 00:09:08.562 START TEST locking_app_on_unlocked_coremask 00:09:08.562 ************************************ 00:09:08.562 18:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:08.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.562 18:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60043 00:09:08.562 18:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60043 /var/tmp/spdk.sock 00:09:08.562 18:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:08.562 18:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60043 ']' 00:09:08.562 18:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.562 18:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.562 18:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.562 18:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.562 18:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:08.562 [2024-12-06 18:07:19.011223] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:09:08.562 [2024-12-06 18:07:19.011618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60043 ] 00:09:08.822 [2024-12-06 18:07:19.195805] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:08.822 [2024-12-06 18:07:19.195865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.822 [2024-12-06 18:07:19.322456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.760 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.760 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:09.760 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60059 00:09:09.760 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60059 /var/tmp/spdk2.sock 00:09:09.760 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:09.760 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60059 ']' 00:09:09.760 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:09.760 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.760 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:09.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:09.760 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.760 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:10.019 [2024-12-06 18:07:20.341528] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
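
locking_app_on_unlocked_coremask, above, inverts that arrangement: the first target opts out with --disable-cpumask-locks, leaving the core-0 lock free for the plain second target to claim. Condensed from the trace:

build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # pid 60043: holds no lock
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # pid 60059: claims core 0 normally
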
00:09:10.019 [2024-12-06 18:07:20.342457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60059 ] 00:09:10.019 [2024-12-06 18:07:20.543160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.298 [2024-12-06 18:07:20.817096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.828 18:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.828 18:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:12.828 18:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60059 00:09:12.828 18:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60059 00:09:12.828 18:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:13.397 18:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60043 00:09:13.397 18:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60043 ']' 00:09:13.397 18:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60043 00:09:13.397 18:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:13.397 18:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.397 18:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60043 00:09:13.656 18:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.656 18:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.656 killing process with pid 60043 00:09:13.656 18:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60043' 00:09:13.656 18:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60043 00:09:13.656 18:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60043 00:09:18.968 18:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60059 00:09:18.968 18:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60059 ']' 00:09:18.968 18:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60059 00:09:18.968 18:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:18.968 18:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.968 18:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60059 00:09:18.968 killing process with pid 60059 00:09:18.968 18:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.968 18:07:28 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.968 18:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60059' 00:09:18.968 18:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60059 00:09:18.968 18:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60059 00:09:20.874 ************************************ 00:09:20.874 END TEST locking_app_on_unlocked_coremask 00:09:20.874 ************************************ 00:09:20.874 00:09:20.874 real 0m12.413s 00:09:20.874 user 0m12.949s 00:09:20.874 sys 0m1.460s 00:09:20.874 18:07:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.874 18:07:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:20.874 18:07:31 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:20.874 18:07:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.874 18:07:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.874 18:07:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:20.874 ************************************ 00:09:20.874 START TEST locking_app_on_locked_coremask 00:09:20.874 ************************************ 00:09:20.874 18:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:20.874 18:07:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:20.874 18:07:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60213 00:09:20.874 18:07:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60213 /var/tmp/spdk.sock 00:09:20.874 18:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60213 ']' 00:09:20.874 18:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.874 18:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.874 18:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.874 18:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.874 18:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:21.134 [2024-12-06 18:07:31.476631] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:09:21.134 [2024-12-06 18:07:31.476761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60213 ] 00:09:21.134 [2024-12-06 18:07:31.651581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.394 [2024-12-06 18:07:31.810616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.333 18:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.333 18:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:22.334 18:07:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60234 00:09:22.334 18:07:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60234 /var/tmp/spdk2.sock 00:09:22.334 18:07:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:22.334 18:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:22.334 18:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60234 /var/tmp/spdk2.sock 00:09:22.334 18:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:22.334 18:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:22.334 18:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:22.334 18:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:22.334 18:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60234 /var/tmp/spdk2.sock 00:09:22.334 18:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60234 ']' 00:09:22.334 18:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:22.334 18:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.334 18:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:22.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:22.334 18:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.334 18:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:22.334 [2024-12-06 18:07:32.763782] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
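
The NOT wrapper set up above inverts the status of the second waitforlisten: the test passes only if pid 60234 is refused the core that 60213 already holds, which is what the claim error just below reports. A condensed sketch of the wrapper (the real autotest_common.sh version also validates its argument and inspects the exit code):

NOT() {
  if "$@"; then
    return 1   # the command was supposed to fail
  fi
  return 0
}
NOT waitforlisten 60234 /var/tmp/spdk2.sock
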
00:09:22.334 [2024-12-06 18:07:32.764096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60234 ] 00:09:22.593 [2024-12-06 18:07:32.944870] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60213 has claimed it. 00:09:22.593 [2024-12-06 18:07:32.944932] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:22.853 ERROR: process (pid: 60234) is no longer running 00:09:22.853 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60234) - No such process 00:09:22.853 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.853 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:22.853 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:22.853 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:22.853 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:22.853 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:22.853 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60213 00:09:22.853 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:22.853 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60213 00:09:23.443 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60213 00:09:23.443 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60213 ']' 00:09:23.444 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60213 00:09:23.444 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:23.444 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.444 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60213 00:09:23.444 killing process with pid 60213 00:09:23.444 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:23.444 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:23.444 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60213' 00:09:23.444 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60213 00:09:23.444 18:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60213 00:09:25.973 00:09:25.973 real 0m5.150s 00:09:25.973 user 0m5.358s 00:09:25.973 sys 0m0.872s 00:09:25.973 18:07:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.973 ************************************ 00:09:25.973 END 
TEST locking_app_on_locked_coremask 00:09:25.973 ************************************ 00:09:25.973 18:07:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:26.232 18:07:36 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:26.232 18:07:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.232 18:07:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.232 18:07:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:26.232 ************************************ 00:09:26.232 START TEST locking_overlapped_coremask 00:09:26.232 ************************************ 00:09:26.232 18:07:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:26.232 18:07:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60304 00:09:26.232 18:07:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60304 /var/tmp/spdk.sock 00:09:26.232 18:07:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60304 ']' 00:09:26.232 18:07:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.232 18:07:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.232 18:07:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.232 18:07:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:26.232 18:07:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.232 18:07:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:26.232 [2024-12-06 18:07:36.682136] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:09:26.232 [2024-12-06 18:07:36.682347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60304 ] 00:09:26.491 [2024-12-06 18:07:36.866505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:26.491 [2024-12-06 18:07:37.034864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.491 [2024-12-06 18:07:37.034928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.491 [2024-12-06 18:07:37.034931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.425 18:07:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.425 18:07:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:27.425 18:07:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60327 00:09:27.425 18:07:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:27.425 18:07:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60327 /var/tmp/spdk2.sock 00:09:27.683 18:07:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:27.683 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60327 /var/tmp/spdk2.sock 00:09:27.683 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:27.683 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:27.683 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:27.683 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:27.683 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60327 /var/tmp/spdk2.sock 00:09:27.683 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60327 ']' 00:09:27.683 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:27.683 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.683 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:27.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:27.683 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.683 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:27.683 [2024-12-06 18:07:38.104338] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
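
locking_overlapped_coremask, starting above, picks masks that collide on a single core: -m 0x7 spans cores 0-2 and -m 0x1c spans cores 2-4. The contested core falls out of the bitwise intersection, which the second target trips over just below:

printf '0x%x\n' $((0x7 & 0x1c))   # 0x4, i.e. bit 2: core 2 is claimed by both masks
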
00:09:27.683 [2024-12-06 18:07:38.105059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60327 ] 00:09:27.953 [2024-12-06 18:07:38.292789] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60304 has claimed it. 00:09:27.953 [2024-12-06 18:07:38.292884] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:28.252 ERROR: process (pid: 60327) is no longer running 00:09:28.252 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60327) - No such process 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60304 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60304 ']' 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60304 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60304 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60304' 00:09:28.252 killing process with pid 60304 00:09:28.252 18:07:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60304 00:09:28.252 18:07:38 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60304 00:09:30.782 ************************************ 00:09:30.782 END TEST locking_overlapped_coremask 00:09:30.782 ************************************ 00:09:30.782 00:09:30.782 real 0m4.648s 00:09:30.782 user 0m12.614s 00:09:30.782 sys 0m0.640s 00:09:30.782 18:07:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.782 18:07:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:30.782 18:07:41 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:30.782 18:07:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.782 18:07:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.782 18:07:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:30.782 ************************************ 00:09:30.782 START TEST locking_overlapped_coremask_via_rpc 00:09:30.782 ************************************ 00:09:30.782 18:07:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:30.782 18:07:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60391 00:09:30.782 18:07:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60391 /var/tmp/spdk.sock 00:09:30.782 18:07:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:30.782 18:07:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60391 ']' 00:09:30.782 18:07:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.782 18:07:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.782 18:07:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.782 18:07:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.782 18:07:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.040 [2024-12-06 18:07:41.391906] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:09:31.040 [2024-12-06 18:07:41.392208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60391 ] 00:09:31.040 [2024-12-06 18:07:41.576940] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
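
The check_remaining_locks step traced at the end of the previous test reduces to a glob-against-expectation compare: after the overlapped second app is refused, exactly the three lock files for cores 0-2 must remain. Condensed from the trace:

locks=(/var/tmp/spdk_cpu_lock_*)                     # whatever lock files exist now
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2, held by the -m 0x7 target
[[ ${locks[*]} == "${locks_expected[*]}" ]]          # any mismatch fails the test
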
00:09:31.040 [2024-12-06 18:07:41.577184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:31.298 [2024-12-06 18:07:41.699486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.298 [2024-12-06 18:07:41.699644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.298 [2024-12-06 18:07:41.699678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:32.231 18:07:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.231 18:07:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:32.231 18:07:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:32.231 18:07:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60415 00:09:32.231 18:07:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60415 /var/tmp/spdk2.sock 00:09:32.231 18:07:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60415 ']' 00:09:32.231 18:07:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:32.231 18:07:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.231 18:07:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:32.231 18:07:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.231 18:07:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.231 [2024-12-06 18:07:42.669585] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:09:32.231 [2024-12-06 18:07:42.670395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60415 ] 00:09:32.488 [2024-12-06 18:07:42.855775] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
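The two targets overlap on exactly one core: the first claims 0x7 (cores 0-2, reactors traced above), and the second is about to be launched with 0x1c (cores 2-4). The collision is plain from the mask arithmetic:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))    # -> 0x4, only bit 2 set: both targets want core 2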
00:09:32.488 [2024-12-06 18:07:42.855834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:32.745 [2024-12-06 18:07:43.089903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.745 [2024-12-06 18:07:43.093388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.745 [2024-12-06 18:07:43.093416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.290 [2024-12-06 18:07:45.261490] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60391 has claimed it. 
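Turning the locks back on is what the framework_enable_cpumask_locks RPC does; the call against the first target succeeds and claims cores 0-2, so the same call against the second target's socket fails the moment it reaches core 2, as the error above shows. Stripped of xtrace noise, the two traced calls reduce to (rpc.py path abbreviated from the trace):

    scripts/rpc.py framework_enable_cpumask_locks                            # first target, /var/tmp/spdk.sock: claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks     # second target: fails on core 2, already claimed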
00:09:35.290 request: 00:09:35.290 { 00:09:35.290 "method": "framework_enable_cpumask_locks", 00:09:35.290 "req_id": 1 00:09:35.290 } 00:09:35.290 Got JSON-RPC error response 00:09:35.290 response: 00:09:35.290 { 00:09:35.290 "code": -32603, 00:09:35.290 "message": "Failed to claim CPU core: 2" 00:09:35.290 } 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60391 /var/tmp/spdk.sock 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60391 ']' 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60415 /var/tmp/spdk2.sock 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60415 ']' 00:09:35.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
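The failure comes back as a normal JSON-RPC error envelope (-32603, the spec's internal-error code) rather than a dead socket, and the test asserts it with the NOT wrapper traced around the rpc_cmd call. A hedged reconstruction of that helper's core logic (the real one in autotest_common.sh also validates the argument and honors an exception list, visible as the [[ -n '' ]] step in the trace):

    NOT() {
        local es=0
        "$@" || es=$?            # run the wrapped command, keep its exit status
        (( es > 128 )) && es=1   # deaths by signal count as a plain failure
        (( ! es == 0 ))          # succeed only if the command failed
    }
    NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks    # green precisely because the RPC errors out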
00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:35.290 00:09:35.290 real 0m4.512s 00:09:35.290 user 0m1.314s 00:09:35.290 sys 0m0.255s 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.290 18:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.290 ************************************ 00:09:35.290 END TEST locking_overlapped_coremask_via_rpc 00:09:35.290 ************************************ 00:09:35.290 18:07:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:35.290 18:07:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60391 ]] 00:09:35.290 18:07:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60391 00:09:35.290 18:07:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60391 ']' 00:09:35.290 18:07:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60391 00:09:35.290 18:07:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:35.290 18:07:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.290 18:07:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60391 00:09:35.549 killing process with pid 60391 00:09:35.549 18:07:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.549 18:07:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.549 18:07:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60391' 00:09:35.549 18:07:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60391 00:09:35.549 18:07:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60391 00:09:38.122 18:07:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60415 ]] 00:09:38.122 18:07:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60415 00:09:38.122 18:07:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60415 ']' 00:09:38.122 18:07:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60415 00:09:38.122 18:07:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:38.122 18:07:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.122 
18:07:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60415 00:09:38.122 killing process with pid 60415 00:09:38.122 18:07:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:38.122 18:07:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:38.122 18:07:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60415' 00:09:38.122 18:07:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60415 00:09:38.122 18:07:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60415 00:09:40.653 18:07:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:40.653 18:07:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:40.653 18:07:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60391 ]] 00:09:40.653 18:07:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60391 00:09:40.653 18:07:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60391 ']' 00:09:40.653 Process with pid 60391 is not found 00:09:40.653 18:07:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60391 00:09:40.653 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60391) - No such process 00:09:40.653 18:07:50 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60391 is not found' 00:09:40.653 18:07:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60415 ]] 00:09:40.653 18:07:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60415 00:09:40.653 18:07:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60415 ']' 00:09:40.653 Process with pid 60415 is not found 00:09:40.653 18:07:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60415 00:09:40.653 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60415) - No such process 00:09:40.653 18:07:50 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60415 is not found' 00:09:40.653 18:07:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:40.653 00:09:40.653 real 0m53.449s 00:09:40.653 user 1m30.613s 00:09:40.653 sys 0m7.335s 00:09:40.653 18:07:50 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.653 ************************************ 00:09:40.653 END TEST cpu_locks 00:09:40.653 ************************************ 00:09:40.653 18:07:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:40.653 ************************************ 00:09:40.653 END TEST event 00:09:40.653 ************************************ 00:09:40.653 00:09:40.653 real 1m23.426s 00:09:40.653 user 2m28.604s 00:09:40.653 sys 0m11.907s 00:09:40.653 18:07:51 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.653 18:07:51 event -- common/autotest_common.sh@10 -- # set +x 00:09:40.653 18:07:51 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:40.653 18:07:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.653 18:07:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.653 18:07:51 -- common/autotest_common.sh@10 -- # set +x 00:09:40.653 ************************************ 00:09:40.653 START TEST thread 00:09:40.653 ************************************ 00:09:40.653 18:07:51 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:40.912 * Looking for test storage... 
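killprocess, traced for both pids above, is also what the cleanup path reuses: it probes the pid with kill -0 (the probe that prints 'No such process' during cleanup), resolves the process name with ps, refuses to touch sudo wrappers, then kills and waits. A hedged sketch of that shape (the helper in autotest_common.sh carries more retries and options than shown):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || { echo "Process with pid $pid is not found"; return 0; }
        local name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 / reactor_2 in this run
        [[ "$name" == sudo ]] && return 1          # never signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }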
00:09:40.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:40.912 18:07:51 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:40.912 18:07:51 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:40.912 18:07:51 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:09:40.912 18:07:51 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:40.912 18:07:51 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.912 18:07:51 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.912 18:07:51 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.912 18:07:51 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.912 18:07:51 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.912 18:07:51 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.912 18:07:51 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.912 18:07:51 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.912 18:07:51 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.912 18:07:51 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.912 18:07:51 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.912 18:07:51 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:40.912 18:07:51 thread -- scripts/common.sh@345 -- # : 1 00:09:40.912 18:07:51 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.912 18:07:51 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:40.912 18:07:51 thread -- scripts/common.sh@365 -- # decimal 1 00:09:40.912 18:07:51 thread -- scripts/common.sh@353 -- # local d=1 00:09:40.912 18:07:51 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.912 18:07:51 thread -- scripts/common.sh@355 -- # echo 1 00:09:40.912 18:07:51 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.912 18:07:51 thread -- scripts/common.sh@366 -- # decimal 2 00:09:40.912 18:07:51 thread -- scripts/common.sh@353 -- # local d=2 00:09:40.912 18:07:51 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.912 18:07:51 thread -- scripts/common.sh@355 -- # echo 2 00:09:40.912 18:07:51 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.912 18:07:51 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.912 18:07:51 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.912 18:07:51 thread -- scripts/common.sh@368 -- # return 0 00:09:40.912 18:07:51 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.912 18:07:51 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:40.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.912 --rc genhtml_branch_coverage=1 00:09:40.912 --rc genhtml_function_coverage=1 00:09:40.912 --rc genhtml_legend=1 00:09:40.912 --rc geninfo_all_blocks=1 00:09:40.912 --rc geninfo_unexecuted_blocks=1 00:09:40.912 00:09:40.912 ' 00:09:40.912 18:07:51 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:40.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.912 --rc genhtml_branch_coverage=1 00:09:40.912 --rc genhtml_function_coverage=1 00:09:40.912 --rc genhtml_legend=1 00:09:40.913 --rc geninfo_all_blocks=1 00:09:40.913 --rc geninfo_unexecuted_blocks=1 00:09:40.913 00:09:40.913 ' 00:09:40.913 18:07:51 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:40.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:40.913 --rc genhtml_branch_coverage=1 00:09:40.913 --rc genhtml_function_coverage=1 00:09:40.913 --rc genhtml_legend=1 00:09:40.913 --rc geninfo_all_blocks=1 00:09:40.913 --rc geninfo_unexecuted_blocks=1 00:09:40.913 00:09:40.913 ' 00:09:40.913 18:07:51 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:40.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.913 --rc genhtml_branch_coverage=1 00:09:40.913 --rc genhtml_function_coverage=1 00:09:40.913 --rc genhtml_legend=1 00:09:40.913 --rc geninfo_all_blocks=1 00:09:40.913 --rc geninfo_unexecuted_blocks=1 00:09:40.913 00:09:40.913 ' 00:09:40.913 18:07:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:40.913 18:07:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:40.913 18:07:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.913 18:07:51 thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.913 ************************************ 00:09:40.913 START TEST thread_poller_perf 00:09:40.913 ************************************ 00:09:40.913 18:07:51 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:40.913 [2024-12-06 18:07:51.419789] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:09:40.913 [2024-12-06 18:07:51.420120] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60613 ] 00:09:41.171 [2024-12-06 18:07:51.606000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.171 [2024-12-06 18:07:51.740289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.171 Running 1000 pollers for 1 seconds with 1 microseconds period. 
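The thread suite's preamble, traced a few lines above, probes lcov with the lt/cmp_versions helpers from scripts/common.sh to pick between the legacy and current coverage flag spellings. A hedged distillation of that comparison (the real cmp_versions also normalizes non-numeric components through decimal(), which this sketch skips):

    ver_lt() {
        local IFS='.-:'               # split version fields the way the traced code does
        local -a v1=($1) v2=($2)
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                      # equal is not less-than
    }
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov 1.x: keep the legacy --rc names'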
00:09:42.547 [2024-12-06T18:07:53.123Z] ====================================== 00:09:42.547 [2024-12-06T18:07:53.123Z] busy:2505128750 (cyc) 00:09:42.547 [2024-12-06T18:07:53.123Z] total_run_count: 352000 00:09:42.547 [2024-12-06T18:07:53.123Z] tsc_hz: 2490000000 (cyc) 00:09:42.547 [2024-12-06T18:07:53.123Z] ====================================== 00:09:42.547 [2024-12-06T18:07:53.123Z] poller_cost: 7116 (cyc), 2857 (nsec) 00:09:42.547 00:09:42.547 real 0m1.633s 00:09:42.547 user 0m1.416s 00:09:42.547 sys 0m0.105s 00:09:42.547 18:07:53 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.547 ************************************ 00:09:42.547 END TEST thread_poller_perf 00:09:42.547 ************************************ 00:09:42.547 18:07:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:42.547 18:07:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:42.547 18:07:53 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:42.547 18:07:53 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.547 18:07:53 thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.547 ************************************ 00:09:42.547 START TEST thread_poller_perf 00:09:42.547 ************************************ 00:09:42.547 18:07:53 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:42.804 [2024-12-06 18:07:53.134718] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:09:42.805 [2024-12-06 18:07:53.134854] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60655 ] 00:09:42.805 [2024-12-06 18:07:53.323490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.062 Running 1000 pollers for 1 seconds with 0 microseconds period. 
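poller_cost in the table above is simply busy cycles divided by run count, converted to nanoseconds with the reported TSC frequency. Reproducing the numbers for the 1 µs-period run:

    echo $(( 2505128750 / 352000 ))               # 7116 cycles per poller invocation
    echo $(( 7116 * 1000000000 / 2490000000 ))    # 2857 nsec at the reported 2.49 GHz tsc_hz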
00:09:43.062 [2024-12-06 18:07:53.453291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.434 [2024-12-06T18:07:55.010Z] ====================================== 00:09:44.434 [2024-12-06T18:07:55.010Z] busy:2494466674 (cyc) 00:09:44.434 [2024-12-06T18:07:55.010Z] total_run_count: 4252000 00:09:44.434 [2024-12-06T18:07:55.010Z] tsc_hz: 2490000000 (cyc) 00:09:44.434 [2024-12-06T18:07:55.010Z] ====================================== 00:09:44.434 [2024-12-06T18:07:55.011Z] poller_cost: 586 (cyc), 235 (nsec) 00:09:44.435 00:09:44.435 real 0m1.623s 00:09:44.435 user 0m1.386s 00:09:44.435 sys 0m0.127s 00:09:44.435 18:07:54 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.435 18:07:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:44.435 ************************************ 00:09:44.435 END TEST thread_poller_perf 00:09:44.435 ************************************ 00:09:44.435 18:07:54 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:44.435 00:09:44.435 real 0m3.633s 00:09:44.435 user 0m2.973s 00:09:44.435 sys 0m0.451s 00:09:44.435 ************************************ 00:09:44.435 END TEST thread 00:09:44.435 ************************************ 00:09:44.435 18:07:54 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.435 18:07:54 thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.435 18:07:54 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:44.435 18:07:54 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:44.435 18:07:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.435 18:07:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.435 18:07:54 -- common/autotest_common.sh@10 -- # set +x 00:09:44.435 ************************************ 00:09:44.435 START TEST app_cmdline 00:09:44.435 ************************************ 00:09:44.435 18:07:54 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:44.435 * Looking for test storage... 
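The 0 µs-period run above amortizes far better, since a zero-period poller skips the per-expiration timer bookkeeping; the same arithmetic reproduces its cost line:

    echo $(( 2494466674 / 4252000 ))    # 586 cyc, i.e. 235 nsec: roughly 12x cheaper per call than the timed poller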
00:09:44.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:44.435 18:07:54 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:44.435 18:07:54 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:09:44.435 18:07:54 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:44.693 18:07:55 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.693 18:07:55 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.694 18:07:55 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.694 18:07:55 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:44.694 18:07:55 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.694 18:07:55 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:44.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.694 --rc genhtml_branch_coverage=1 00:09:44.694 --rc genhtml_function_coverage=1 00:09:44.694 --rc genhtml_legend=1 00:09:44.694 --rc geninfo_all_blocks=1 00:09:44.694 --rc geninfo_unexecuted_blocks=1 00:09:44.694 00:09:44.694 ' 00:09:44.694 18:07:55 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:44.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.694 --rc genhtml_branch_coverage=1 00:09:44.694 --rc genhtml_function_coverage=1 00:09:44.694 --rc genhtml_legend=1 00:09:44.694 --rc geninfo_all_blocks=1 00:09:44.694 --rc geninfo_unexecuted_blocks=1 00:09:44.694 
00:09:44.694 ' 00:09:44.694 18:07:55 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:44.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.694 --rc genhtml_branch_coverage=1 00:09:44.694 --rc genhtml_function_coverage=1 00:09:44.694 --rc genhtml_legend=1 00:09:44.694 --rc geninfo_all_blocks=1 00:09:44.694 --rc geninfo_unexecuted_blocks=1 00:09:44.694 00:09:44.694 ' 00:09:44.694 18:07:55 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:44.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.694 --rc genhtml_branch_coverage=1 00:09:44.694 --rc genhtml_function_coverage=1 00:09:44.694 --rc genhtml_legend=1 00:09:44.694 --rc geninfo_all_blocks=1 00:09:44.694 --rc geninfo_unexecuted_blocks=1 00:09:44.694 00:09:44.694 ' 00:09:44.694 18:07:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:44.694 18:07:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60744 00:09:44.694 18:07:55 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:44.694 18:07:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60744 00:09:44.694 18:07:55 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60744 ']' 00:09:44.694 18:07:55 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.694 18:07:55 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.694 18:07:55 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.694 18:07:55 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.694 18:07:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:44.694 [2024-12-06 18:07:55.191945] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
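cmdline.sh exercises the RPC whitelist: the target traced here is started so that only spdk_get_version and rpc_get_methods may be invoked over /var/tmp/spdk.sock. The launch reduces to (binary path shortened from the trace):

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &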
00:09:44.694 [2024-12-06 18:07:55.192299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60744 ] 00:09:44.952 [2024-12-06 18:07:55.382898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.952 [2024-12-06 18:07:55.514595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.323 18:07:56 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.323 18:07:56 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:46.323 18:07:56 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:46.323 { 00:09:46.323 "version": "SPDK v25.01-pre git sha1 1148849d6", 00:09:46.323 "fields": { 00:09:46.323 "major": 25, 00:09:46.323 "minor": 1, 00:09:46.323 "patch": 0, 00:09:46.323 "suffix": "-pre", 00:09:46.323 "commit": "1148849d6" 00:09:46.323 } 00:09:46.323 } 00:09:46.323 18:07:56 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:46.323 18:07:56 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:46.323 18:07:56 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:46.323 18:07:56 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:46.323 18:07:56 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:46.323 18:07:56 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:46.323 18:07:56 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:46.323 18:07:56 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.323 18:07:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:46.323 18:07:56 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.323 18:07:56 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:46.323 18:07:56 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:46.323 18:07:56 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:46.324 18:07:56 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:46.324 18:07:56 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:46.324 18:07:56 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:46.324 18:07:56 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:46.324 18:07:56 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:46.324 18:07:56 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:46.324 18:07:56 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:46.324 18:07:56 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:46.324 18:07:56 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:46.324 18:07:56 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:46.324 18:07:56 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:46.581 request: 00:09:46.581 { 00:09:46.581 "method": "env_dpdk_get_mem_stats", 00:09:46.581 "req_id": 1 00:09:46.581 } 00:09:46.581 Got JSON-RPC error response 00:09:46.581 response: 00:09:46.581 { 00:09:46.581 "code": -32601, 00:09:46.581 "message": "Method not found" 00:09:46.581 } 00:09:46.581 18:07:57 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:46.581 18:07:57 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:46.581 18:07:57 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:46.581 18:07:57 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:46.581 18:07:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60744 00:09:46.581 18:07:57 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60744 ']' 00:09:46.581 18:07:57 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60744 00:09:46.581 18:07:57 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:46.581 18:07:57 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.581 18:07:57 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60744 00:09:46.581 18:07:57 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.581 killing process with pid 60744 00:09:46.581 18:07:57 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.581 18:07:57 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60744' 00:09:46.581 18:07:57 app_cmdline -- common/autotest_common.sh@973 -- # kill 60744 00:09:46.581 18:07:57 app_cmdline -- common/autotest_common.sh@978 -- # wait 60744 00:09:49.108 ************************************ 00:09:49.108 END TEST app_cmdline 00:09:49.108 ************************************ 00:09:49.108 00:09:49.108 real 0m4.768s 00:09:49.108 user 0m5.070s 00:09:49.108 sys 0m0.683s 00:09:49.108 18:07:59 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.108 18:07:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:49.108 18:07:59 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:49.108 18:07:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:49.108 18:07:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.108 18:07:59 -- common/autotest_common.sh@10 -- # set +x 00:09:49.108 ************************************ 00:09:49.108 START TEST version 00:09:49.108 ************************************ 00:09:49.108 18:07:59 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:49.427 * Looking for test storage... 
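The whitelist held up as asserted just above: rpc_get_methods returned exactly the two allowed methods, and env_dpdk_get_mem_stats, a real method that simply is not on the list, came back with code -32601 ('Method not found') rather than -32603. The method check from the trace reduces to:

    methods=($(scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort))
    (( ${#methods[@]} == 2 ))                                      # nothing beyond the whitelist leaked through
    [[ "${methods[*]}" == 'rpc_get_methods spdk_get_version' ]]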
00:09:49.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:49.427 18:07:59 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:49.427 18:07:59 version -- common/autotest_common.sh@1711 -- # lcov --version 00:09:49.427 18:07:59 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:49.427 18:07:59 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:49.427 18:07:59 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.427 18:07:59 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.427 18:07:59 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.427 18:07:59 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.427 18:07:59 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.427 18:07:59 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.427 18:07:59 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.427 18:07:59 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.427 18:07:59 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.427 18:07:59 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.427 18:07:59 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.427 18:07:59 version -- scripts/common.sh@344 -- # case "$op" in 00:09:49.427 18:07:59 version -- scripts/common.sh@345 -- # : 1 00:09:49.427 18:07:59 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.427 18:07:59 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:49.427 18:07:59 version -- scripts/common.sh@365 -- # decimal 1 00:09:49.427 18:07:59 version -- scripts/common.sh@353 -- # local d=1 00:09:49.427 18:07:59 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.427 18:07:59 version -- scripts/common.sh@355 -- # echo 1 00:09:49.427 18:07:59 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.427 18:07:59 version -- scripts/common.sh@366 -- # decimal 2 00:09:49.427 18:07:59 version -- scripts/common.sh@353 -- # local d=2 00:09:49.427 18:07:59 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.427 18:07:59 version -- scripts/common.sh@355 -- # echo 2 00:09:49.427 18:07:59 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.427 18:07:59 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.427 18:07:59 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.427 18:07:59 version -- scripts/common.sh@368 -- # return 0 00:09:49.427 18:07:59 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.427 18:07:59 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:49.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.427 --rc genhtml_branch_coverage=1 00:09:49.427 --rc genhtml_function_coverage=1 00:09:49.427 --rc genhtml_legend=1 00:09:49.427 --rc geninfo_all_blocks=1 00:09:49.427 --rc geninfo_unexecuted_blocks=1 00:09:49.427 00:09:49.427 ' 00:09:49.427 18:07:59 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:49.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.427 --rc genhtml_branch_coverage=1 00:09:49.427 --rc genhtml_function_coverage=1 00:09:49.427 --rc genhtml_legend=1 00:09:49.427 --rc geninfo_all_blocks=1 00:09:49.427 --rc geninfo_unexecuted_blocks=1 00:09:49.427 00:09:49.427 ' 00:09:49.427 18:07:59 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:49.427 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:49.427 --rc genhtml_branch_coverage=1 00:09:49.428 --rc genhtml_function_coverage=1 00:09:49.428 --rc genhtml_legend=1 00:09:49.428 --rc geninfo_all_blocks=1 00:09:49.428 --rc geninfo_unexecuted_blocks=1 00:09:49.428 00:09:49.428 ' 00:09:49.428 18:07:59 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:49.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.428 --rc genhtml_branch_coverage=1 00:09:49.428 --rc genhtml_function_coverage=1 00:09:49.428 --rc genhtml_legend=1 00:09:49.428 --rc geninfo_all_blocks=1 00:09:49.428 --rc geninfo_unexecuted_blocks=1 00:09:49.428 00:09:49.428 ' 00:09:49.428 18:07:59 version -- app/version.sh@17 -- # get_header_version major 00:09:49.428 18:07:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:49.428 18:07:59 version -- app/version.sh@14 -- # cut -f2 00:09:49.428 18:07:59 version -- app/version.sh@14 -- # tr -d '"' 00:09:49.428 18:07:59 version -- app/version.sh@17 -- # major=25 00:09:49.428 18:07:59 version -- app/version.sh@18 -- # get_header_version minor 00:09:49.428 18:07:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:49.428 18:07:59 version -- app/version.sh@14 -- # cut -f2 00:09:49.428 18:07:59 version -- app/version.sh@14 -- # tr -d '"' 00:09:49.428 18:07:59 version -- app/version.sh@18 -- # minor=1 00:09:49.428 18:07:59 version -- app/version.sh@19 -- # get_header_version patch 00:09:49.428 18:07:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:49.428 18:07:59 version -- app/version.sh@14 -- # cut -f2 00:09:49.428 18:07:59 version -- app/version.sh@14 -- # tr -d '"' 00:09:49.428 18:07:59 version -- app/version.sh@19 -- # patch=0 00:09:49.428 18:07:59 version -- app/version.sh@20 -- # get_header_version suffix 00:09:49.428 18:07:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:49.428 18:07:59 version -- app/version.sh@14 -- # cut -f2 00:09:49.428 18:07:59 version -- app/version.sh@14 -- # tr -d '"' 00:09:49.428 18:07:59 version -- app/version.sh@20 -- # suffix=-pre 00:09:49.428 18:07:59 version -- app/version.sh@22 -- # version=25.1 00:09:49.428 18:07:59 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:49.428 18:07:59 version -- app/version.sh@28 -- # version=25.1rc0 00:09:49.428 18:07:59 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:49.428 18:07:59 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:49.428 18:07:59 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:49.428 18:07:59 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:49.428 00:09:49.428 real 0m0.328s 00:09:49.428 user 0m0.204s 00:09:49.428 sys 0m0.180s 00:09:49.687 ************************************ 00:09:49.687 END TEST version 00:09:49.687 ************************************ 00:09:49.687 18:08:00 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.687 18:08:00 version -- common/autotest_common.sh@10 -- # set +x 00:09:49.687 18:08:00 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:49.687 18:08:00 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:49.687 18:08:00 -- spdk/autotest.sh@194 -- # uname -s 00:09:49.687 18:08:00 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:49.687 18:08:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:49.687 18:08:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:49.687 18:08:00 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:09:49.687 18:08:00 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:49.687 18:08:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:49.687 18:08:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.687 18:08:00 -- common/autotest_common.sh@10 -- # set +x 00:09:49.687 ************************************ 00:09:49.687 START TEST blockdev_nvme 00:09:49.687 ************************************ 00:09:49.687 18:08:00 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:49.687 * Looking for test storage... 00:09:49.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:49.687 18:08:00 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:49.687 18:08:00 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:09:49.687 18:08:00 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:49.947 18:08:00 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.947 18:08:00 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:09:49.947 18:08:00 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.947 18:08:00 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:49.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.947 --rc genhtml_branch_coverage=1 00:09:49.947 --rc genhtml_function_coverage=1 00:09:49.947 --rc genhtml_legend=1 00:09:49.947 --rc geninfo_all_blocks=1 00:09:49.947 --rc geninfo_unexecuted_blocks=1 00:09:49.947 00:09:49.947 ' 00:09:49.947 18:08:00 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:49.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.947 --rc genhtml_branch_coverage=1 00:09:49.947 --rc genhtml_function_coverage=1 00:09:49.947 --rc genhtml_legend=1 00:09:49.947 --rc geninfo_all_blocks=1 00:09:49.947 --rc geninfo_unexecuted_blocks=1 00:09:49.947 00:09:49.947 ' 00:09:49.947 18:08:00 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:49.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.947 --rc genhtml_branch_coverage=1 00:09:49.947 --rc genhtml_function_coverage=1 00:09:49.947 --rc genhtml_legend=1 00:09:49.947 --rc geninfo_all_blocks=1 00:09:49.947 --rc geninfo_unexecuted_blocks=1 00:09:49.947 00:09:49.947 ' 00:09:49.947 18:08:00 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:49.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.947 --rc genhtml_branch_coverage=1 00:09:49.947 --rc genhtml_function_coverage=1 00:09:49.947 --rc genhtml_legend=1 00:09:49.947 --rc geninfo_all_blocks=1 00:09:49.947 --rc geninfo_unexecuted_blocks=1 00:09:49.947 00:09:49.947 ' 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:49.947 18:08:00 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:09:49.947 18:08:00 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:09:49.948 18:08:00 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:09:49.948 18:08:00 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:09:49.948 18:08:00 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60938 00:09:49.948 18:08:00 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:49.948 18:08:00 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:49.948 18:08:00 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60938 00:09:49.948 18:08:00 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 60938 ']' 00:09:49.948 18:08:00 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.948 18:08:00 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.948 18:08:00 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.948 18:08:00 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.948 18:08:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:49.948 [2024-12-06 18:08:00.477658] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
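The nvme flavor of blockdev.sh gets its block devices from gen_nvme.sh: as the trace just below shows, the generated JSON attaches one bdev_nvme controller per emulated PCIe device and is handed to load_subsystem_config. Cut down to a single controller, the call is:

    rpc_cmd load_subsystem_config -j '{ "subsystem": "bdev", "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } } ] }'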
00:09:49.948 [2024-12-06 18:08:00.478071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60938 ] 00:09:50.206 [2024-12-06 18:08:00.668792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.465 [2024-12-06 18:08:00.785443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.397 18:08:01 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.397 18:08:01 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:09:51.397 18:08:01 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:09:51.397 18:08:01 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:09:51.397 18:08:01 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:09:51.397 18:08:01 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:51.397 18:08:01 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:51.397 18:08:01 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:51.397 18:08:01 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.397 18:08:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.655 18:08:02 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.655 18:08:02 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:09:51.655 18:08:02 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.655 18:08:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.655 18:08:02 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.655 18:08:02 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:09:51.655 18:08:02 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:09:51.655 18:08:02 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.655 18:08:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.655 18:08:02 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.655 18:08:02 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:09:51.655 18:08:02 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.655 18:08:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.655 18:08:02 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.655 18:08:02 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:51.655 18:08:02 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.655 18:08:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.655 18:08:02 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.655 18:08:02 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:09:51.655 18:08:02 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:09:51.655 18:08:02 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:09:51.655 18:08:02 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.655 18:08:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.914 18:08:02 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.914 18:08:02 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:09:51.914 18:08:02 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:09:51.914 18:08:02 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "cd522fe4-cbc5-4160-a287-78f9d88acabc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cd522fe4-cbc5-4160-a287-78f9d88acabc",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "a5f0c8ef-0046-447e-abac-a9fb0444c5f3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a5f0c8ef-0046-447e-abac-a9fb0444c5f3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "398a1f28-e545-4aa8-a686-029cf4b7509f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "398a1f28-e545-4aa8-a686-029cf4b7509f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "da8e8870-9bfd-4f84-8387-11bc107b5f2f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "da8e8870-9bfd-4f84-8387-11bc107b5f2f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "a7a101a6-7949-422c-8825-6678b77bae9a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "a7a101a6-7949-422c-8825-6678b77bae9a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "8456107b-c81a-48d9-b143-5647fff91f27"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8456107b-c81a-48d9-b143-5647fff91f27",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:51.914 18:08:02 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:09:51.914 18:08:02 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:09:51.914 18:08:02 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:09:51.914 18:08:02 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 60938 00:09:51.915 18:08:02 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 60938 ']' 00:09:51.915 18:08:02 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 60938 00:09:51.915 18:08:02 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:09:51.915 18:08:02 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.915 18:08:02 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60938 00:09:51.915 killing process with pid 60938 00:09:51.915 18:08:02 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.915 18:08:02 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.915 18:08:02 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60938' 00:09:51.915 18:08:02 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 60938 00:09:51.915 18:08:02 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 60938 00:09:54.446 18:08:04 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:54.446 18:08:04 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:54.446 18:08:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:54.446 18:08:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.446 18:08:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:54.446 ************************************ 00:09:54.446 START TEST bdev_hello_world 00:09:54.446 ************************************ 00:09:54.446 18:08:04 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:54.446 [2024-12-06 18:08:04.871076] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:09:54.446 [2024-12-06 18:08:04.871223] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61033 ] 00:09:54.703 [2024-12-06 18:08:05.058788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.704 [2024-12-06 18:08:05.187143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.639 [2024-12-06 18:08:05.886900] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:55.639 [2024-12-06 18:08:05.886966] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:55.639 [2024-12-06 18:08:05.886997] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:55.639 [2024-12-06 18:08:05.890318] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:55.639 [2024-12-06 18:08:05.890718] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:55.639 [2024-12-06 18:08:05.890755] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:55.639 [2024-12-06 18:08:05.891143] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
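(Reference note: the hello_bdev example exercised above is a standalone binary and can be re-run by hand against the same JSON config; a minimal sketch, assuming the repo layout shown in this log and root privileges for hugepage/device access:

  sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1

On success it opens the Nvme0n1 bdev, writes a buffer, reads it back, and prints the "Hello World!" string seen in the output above.)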
00:09:55.639 00:09:55.639 [2024-12-06 18:08:05.891170] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:56.573 00:09:56.573 real 0m2.277s 00:09:56.573 user 0m1.904s 00:09:56.573 sys 0m0.263s 00:09:56.573 ************************************ 00:09:56.573 END TEST bdev_hello_world 00:09:56.573 ************************************ 00:09:56.573 18:08:07 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.573 18:08:07 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:56.573 18:08:07 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:09:56.573 18:08:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:56.573 18:08:07 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.573 18:08:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:56.573 ************************************ 00:09:56.573 START TEST bdev_bounds 00:09:56.573 ************************************ 00:09:56.573 18:08:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:09:56.573 18:08:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61075 00:09:56.573 18:08:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:56.573 Process bdevio pid: 61075 00:09:56.573 18:08:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:56.573 18:08:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61075' 00:09:56.573 18:08:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61075 00:09:56.573 18:08:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61075 ']' 00:09:56.573 18:08:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.573 18:08:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.573 18:08:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.573 18:08:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.573 18:08:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:56.830 [2024-12-06 18:08:07.221742] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:09:56.830 [2024-12-06 18:08:07.221871] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61075 ] 00:09:57.088 [2024-12-06 18:08:07.406519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:57.088 [2024-12-06 18:08:07.534251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.088 [2024-12-06 18:08:07.534416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.088 [2024-12-06 18:08:07.534441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.021 18:08:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.021 18:08:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:09:58.021 18:08:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:58.021 I/O targets: 00:09:58.021 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:58.021 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:58.021 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:58.021 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:58.021 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:58.021 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:58.021 00:09:58.021 00:09:58.021 CUnit - A unit testing framework for C - Version 2.1-3 00:09:58.021 http://cunit.sourceforge.net/ 00:09:58.021 00:09:58.021 00:09:58.021 Suite: bdevio tests on: Nvme3n1 00:09:58.021 Test: blockdev write read block ...passed 00:09:58.021 Test: blockdev write zeroes read block ...passed 00:09:58.021 Test: blockdev write zeroes read no split ...passed 00:09:58.021 Test: blockdev write zeroes read split ...passed 00:09:58.021 Test: blockdev write zeroes read split partial ...passed 00:09:58.021 Test: blockdev reset ...[2024-12-06 18:08:08.461399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:09:58.021 [2024-12-06 18:08:08.465507] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:09:58.021 passed 00:09:58.021 Test: blockdev write read 8 blocks ...
00:09:58.021 passed 00:09:58.021 Test: blockdev write read size > 128k ...passed 00:09:58.021 Test: blockdev write read invalid size ...passed 00:09:58.021 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:58.021 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:58.021 Test: blockdev write read max offset ...passed 00:09:58.021 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:58.021 Test: blockdev writev readv 8 blocks ...passed 00:09:58.021 Test: blockdev writev readv 30 x 1block ...passed 00:09:58.021 Test: blockdev writev readv block ...passed 00:09:58.021 Test: blockdev writev readv size > 128k ...passed 00:09:58.021 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:58.021 Test: blockdev comparev and writev ...[2024-12-06 18:08:08.475055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b540a000 len:0x1000 00:09:58.021 [2024-12-06 18:08:08.475112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:58.021 passed 00:09:58.021 Test: blockdev nvme passthru rw ...passed 00:09:58.021 Test: blockdev nvme passthru vendor specific ...passed 00:09:58.021 Test: blockdev nvme admin passthru ...[2024-12-06 18:08:08.475952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:58.021 [2024-12-06 18:08:08.475999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:58.021 passed 00:09:58.021 Test: blockdev copy ...passed 00:09:58.021 Suite: bdevio tests on: Nvme2n3 00:09:58.021 Test: blockdev write read block ...passed 00:09:58.021 Test: blockdev write zeroes read block ...passed 00:09:58.021 Test: blockdev write zeroes read no split ...passed 00:09:58.021 Test: blockdev write zeroes read split ...passed 00:09:58.021 Test: blockdev write zeroes read split partial ...passed 00:09:58.021 Test: blockdev reset ...[2024-12-06 18:08:08.555892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:58.021 passed 00:09:58.021 Test: blockdev write read 8 blocks ...[2024-12-06 18:08:08.560154] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:58.021 passed 00:09:58.021 Test: blockdev write read size > 128k ...passed 00:09:58.021 Test: blockdev write read invalid size ...passed 00:09:58.021 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:58.021 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:58.021 Test: blockdev write read max offset ...passed 00:09:58.021 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:58.021 Test: blockdev writev readv 8 blocks ...passed 00:09:58.021 Test: blockdev writev readv 30 x 1block ...passed 00:09:58.021 Test: blockdev writev readv block ...passed 00:09:58.021 Test: blockdev writev readv size > 128k ...passed 00:09:58.021 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:58.021 Test: blockdev comparev and writev ...[2024-12-06 18:08:08.569369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x298606000 len:0x1000 00:09:58.021 [2024-12-06 18:08:08.569436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:58.021 passed 00:09:58.021 Test: blockdev nvme passthru rw ...passed 00:09:58.021 Test: blockdev nvme passthru vendor specific ...[2024-12-06 18:08:08.570357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:58.021 passed 00:09:58.021 Test: blockdev nvme admin passthru ...[2024-12-06 18:08:08.570395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:58.021 passed 00:09:58.021 Test: blockdev copy ...passed 00:09:58.021 Suite: bdevio tests on: Nvme2n2 00:09:58.021 Test: blockdev write read block ...passed 00:09:58.021 Test: blockdev write zeroes read block ...passed 00:09:58.021 Test: blockdev write zeroes read no split ...passed 00:09:58.279 Test: blockdev write zeroes read split ...passed 00:09:58.279 Test: blockdev write zeroes read split partial ...passed 00:09:58.279 Test: blockdev reset ...[2024-12-06 18:08:08.652857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:58.279 [2024-12-06 18:08:08.657233] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:09:58.279 passed 00:09:58.279 Test: blockdev write read 8 blocks ...
00:09:58.279 passed 00:09:58.279 Test: blockdev write read size > 128k ...passed 00:09:58.279 Test: blockdev write read invalid size ...passed 00:09:58.279 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:58.279 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:58.279 Test: blockdev write read max offset ...passed 00:09:58.279 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:58.279 Test: blockdev writev readv 8 blocks ...passed 00:09:58.279 Test: blockdev writev readv 30 x 1block ...passed 00:09:58.279 Test: blockdev writev readv block ...passed 00:09:58.279 Test: blockdev writev readv size > 128k ...passed 00:09:58.279 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:58.279 Test: blockdev comparev and writev ...[2024-12-06 18:08:08.667366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c543c000 len:0x1000 00:09:58.279 [2024-12-06 18:08:08.667555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:58.279 passed 00:09:58.279 Test: blockdev nvme passthru rw ...passed 00:09:58.279 Test: blockdev nvme passthru vendor specific ...[2024-12-06 18:08:08.668751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:58.279 [2024-12-06 18:08:08.668848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:58.279 passed 00:09:58.279 Test: blockdev nvme admin passthru ...passed 00:09:58.279 Test: blockdev copy ...passed 00:09:58.279 Suite: bdevio tests on: Nvme2n1 00:09:58.279 Test: blockdev write read block ...passed 00:09:58.279 Test: blockdev write zeroes read block ...passed 00:09:58.279 Test: blockdev write zeroes read no split ...passed 00:09:58.279 Test: blockdev write zeroes read split ...passed 00:09:58.279 Test: blockdev write zeroes read split partial ...passed 00:09:58.279 Test: blockdev reset ...[2024-12-06 18:08:08.747164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:58.279 passed 00:09:58.279 Test: blockdev write read 8 blocks ...[2024-12-06 18:08:08.751412] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:09:58.279 passed 00:09:58.279 Test: blockdev write read size > 128k ...passed 00:09:58.279 Test: blockdev write read invalid size ...passed 00:09:58.279 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:58.279 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:58.279 Test: blockdev write read max offset ...passed 00:09:58.279 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:58.280 Test: blockdev writev readv 8 blocks ...passed 00:09:58.280 Test: blockdev writev readv 30 x 1block ...passed 00:09:58.280 Test: blockdev writev readv block ...passed 00:09:58.280 Test: blockdev writev readv size > 128k ...passed 00:09:58.280 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:58.280 Test: blockdev comparev and writev ...[2024-12-06 18:08:08.760649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5438000 len:0x1000 00:09:58.280 [2024-12-06 18:08:08.760704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:58.280 passed 00:09:58.280 Test: blockdev nvme passthru rw ...passed 00:09:58.280 Test: blockdev nvme passthru vendor specific ...[2024-12-06 18:08:08.761568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:58.280 [2024-12-06 18:08:08.761603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:58.280 passed 00:09:58.280 Test: blockdev nvme admin passthru ...passed 00:09:58.280 Test: blockdev copy ...passed 00:09:58.280 Suite: bdevio tests on: Nvme1n1 00:09:58.280 Test: blockdev write read block ...passed 00:09:58.280 Test: blockdev write zeroes read block ...passed 00:09:58.280 Test: blockdev write zeroes read no split ...passed 00:09:58.280 Test: blockdev write zeroes read split ...passed 00:09:58.280 Test: blockdev write zeroes read split partial ...passed 00:09:58.280 Test: blockdev reset ...[2024-12-06 18:08:08.841467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:58.280 passed 00:09:58.280 Test: blockdev write read 8 blocks ...[2024-12-06 18:08:08.845517] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:09:58.280 passed 00:09:58.280 Test: blockdev write read size > 128k ...passed 00:09:58.280 Test: blockdev write read invalid size ...passed 00:09:58.280 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:58.280 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:58.280 Test: blockdev write read max offset ...passed 00:09:58.280 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:58.280 Test: blockdev writev readv 8 blocks ...passed 00:09:58.280 Test: blockdev writev readv 30 x 1block ...passed 00:09:58.280 Test: blockdev writev readv block ...passed 00:09:58.280 Test: blockdev writev readv size > 128k ...passed 00:09:58.280 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:58.280 Test: blockdev comparev and writev ...[2024-12-06 18:08:08.853963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5434000 len:0x1000 00:09:58.280 [2024-12-06 18:08:08.854016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:58.280 passed 00:09:58.537 Test: blockdev nvme passthru rw ...passed 00:09:58.537 Test: blockdev nvme passthru vendor specific ...passed 00:09:58.537 Test: blockdev nvme admin passthru ...[2024-12-06 18:08:08.854866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:58.537 [2024-12-06 18:08:08.854910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:58.537 passed 00:09:58.537 Test: blockdev copy ...passed 00:09:58.537 Suite: bdevio tests on: Nvme0n1 00:09:58.537 Test: blockdev write read block ...passed 00:09:58.537 Test: blockdev write zeroes read block ...passed 00:09:58.537 Test: blockdev write zeroes read no split ...passed 00:09:58.537 Test: blockdev write zeroes read split ...passed 00:09:58.537 Test: blockdev write zeroes read split partial ...passed 00:09:58.537 Test: blockdev reset ...[2024-12-06 18:08:08.934625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:58.537 [2024-12-06 18:08:08.938728] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:58.537 passed 00:09:58.537 Test: blockdev write read 8 blocks ...passed 00:09:58.537 Test: blockdev write read size > 128k ...passed 00:09:58.537 Test: blockdev write read invalid size ...passed 00:09:58.537 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:58.537 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:58.537 Test: blockdev write read max offset ...passed 00:09:58.537 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:58.537 Test: blockdev writev readv 8 blocks ...passed 00:09:58.537 Test: blockdev writev readv 30 x 1block ...passed 00:09:58.537 Test: blockdev writev readv block ...passed 00:09:58.537 Test: blockdev writev readv size > 128k ...passed 00:09:58.537 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:58.537 Test: blockdev comparev and writev ...[2024-12-06 18:08:08.947313] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:58.537 separate metadata which is not supported yet. passed
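(Reference note: the comparev_and_writev skip on Nvme0n1 above is consistent with the earlier bdev_get_bdevs dump, where only Nvme0n1 reports "md_size": 64 with "md_interleave": false, i.e. separate metadata. A quick way to check this against the running target; a sketch assuming the rpc.py path and default RPC socket used in this log:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
      | jq '.[0] | {md_size, md_interleave}'

The other bdevs report no metadata fields, so comparev runs on them; the COMPARE FAILURE completions logged in those suites appear while the tests themselves still pass.)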
00:09:58.537 00:09:58.537 Test: blockdev nvme passthru rw ...passed 00:09:58.537 Test: blockdev nvme passthru vendor specific ...[2024-12-06 18:08:08.948104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:58.537 [2024-12-06 18:08:08.948282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:58.537 passed 00:09:58.537 Test: blockdev nvme admin passthru ...passed 00:09:58.537 Test: blockdev copy ...passed 00:09:58.537 00:09:58.538 Run Summary: Type Total Ran Passed Failed Inactive 00:09:58.538 suites 6 6 n/a 0 0 00:09:58.538 tests 138 138 138 0 0 00:09:58.538 asserts 893 893 893 0 n/a 00:09:58.538 00:09:58.538 Elapsed time = 1.527 seconds 00:09:58.538 0 00:09:58.538 18:08:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61075 00:09:58.538 18:08:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61075 ']' 00:09:58.538 18:08:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61075 00:09:58.538 18:08:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:09:58.538 18:08:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.538 18:08:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61075 00:09:58.538 killing process with pid 61075 00:09:58.538 18:08:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.538 18:08:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.538 18:08:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61075' 00:09:58.538 18:08:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61075 00:09:58.538 18:08:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61075 00:09:59.909 ************************************ 00:09:59.909 END TEST bdev_bounds 00:09:59.909 ************************************ 00:09:59.909 18:08:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:59.909 00:09:59.909 real 0m2.953s 00:09:59.909 user 0m7.631s 00:09:59.909 sys 0m0.415s 00:09:59.909 18:08:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.909 18:08:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:59.909 18:08:10 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:59.909 18:08:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:59.909 18:08:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.909 18:08:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:59.909 ************************************ 00:09:59.909 START TEST bdev_nbd 00:09:59.909 ************************************ 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- 
bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61140 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61140 /var/tmp/spdk-nbd.sock 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61140 ']' 00:09:59.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.909 18:08:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:59.909 [2024-12-06 18:08:10.263900] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:09:59.909 [2024-12-06 18:08:10.264018] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.909 [2024-12-06 18:08:10.446825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.166 [2024-12-06 18:08:10.566097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.733 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.733 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:10:00.733 18:08:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:00.733 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:00.733 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:00.733 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:00.733 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:00.733 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:00.733 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:00.733 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:00.733 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:00.733 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:00.733 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:00.733 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:00.733 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:01.001 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:01.001 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:01.001 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:01.001 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:01.001 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:01.001 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:01.001 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:01.001 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:01.001 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:01.001 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:01.002 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:01.002 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:01.260 1+0 records in 
00:10:01.260 1+0 records out 00:10:01.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604829 s, 6.8 MB/s 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:01.260 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:01.260 1+0 records in 00:10:01.260 1+0 records out 00:10:01.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445674 s, 9.2 MB/s 00:10:01.518 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:01.518 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:01.518 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:01.518 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:01.518 18:08:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:01.518 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:01.518 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:01.518 18:08:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:01.518 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:01.518 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:01.776 1+0 records in 00:10:01.776 1+0 records out 00:10:01.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761627 s, 5.4 MB/s 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:01.776 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:02.034 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:02.034 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:10:02.034 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:02.034 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:02.034 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:02.034 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:10:02.034 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:02.034 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:02.034 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:02.034 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:02.034 1+0 records in 00:10:02.034 1+0 records out 00:10:02.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597667 s, 6.9 MB/s 00:10:02.034 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:02.034 18:08:12 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:02.034 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:02.034 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:02.034 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:02.034 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:02.034 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:02.034 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:02.292 1+0 records in 00:10:02.292 1+0 records out 00:10:02.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583244 s, 7.0 MB/s 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:02.292 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:02.550 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:02.550 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:02.550 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:02.550 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:10:02.550 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:02.550 18:08:12 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:02.551 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:02.551 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:10:02.551 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:02.551 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:02.551 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:02.551 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:02.551 1+0 records in 00:10:02.551 1+0 records out 00:10:02.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00070994 s, 5.8 MB/s 00:10:02.551 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:02.551 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:02.551 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:02.551 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:02.551 18:08:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:02.551 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:02.551 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:02.551 18:08:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:02.809 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:02.809 { 00:10:02.809 "nbd_device": "/dev/nbd0", 00:10:02.809 "bdev_name": "Nvme0n1" 00:10:02.809 }, 00:10:02.809 { 00:10:02.809 "nbd_device": "/dev/nbd1", 00:10:02.809 "bdev_name": "Nvme1n1" 00:10:02.809 }, 00:10:02.809 { 00:10:02.809 "nbd_device": "/dev/nbd2", 00:10:02.809 "bdev_name": "Nvme2n1" 00:10:02.809 }, 00:10:02.809 { 00:10:02.809 "nbd_device": "/dev/nbd3", 00:10:02.809 "bdev_name": "Nvme2n2" 00:10:02.809 }, 00:10:02.809 { 00:10:02.809 "nbd_device": "/dev/nbd4", 00:10:02.809 "bdev_name": "Nvme2n3" 00:10:02.809 }, 00:10:02.809 { 00:10:02.809 "nbd_device": "/dev/nbd5", 00:10:02.809 "bdev_name": "Nvme3n1" 00:10:02.809 } 00:10:02.809 ]' 00:10:02.809 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:02.809 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:02.809 { 00:10:02.809 "nbd_device": "/dev/nbd0", 00:10:02.809 "bdev_name": "Nvme0n1" 00:10:02.809 }, 00:10:02.809 { 00:10:02.809 "nbd_device": "/dev/nbd1", 00:10:02.809 "bdev_name": "Nvme1n1" 00:10:02.809 }, 00:10:02.809 { 00:10:02.809 "nbd_device": "/dev/nbd2", 00:10:02.809 "bdev_name": "Nvme2n1" 00:10:02.809 }, 00:10:02.809 { 00:10:02.809 "nbd_device": "/dev/nbd3", 00:10:02.809 "bdev_name": "Nvme2n2" 00:10:02.809 }, 00:10:02.809 { 00:10:02.809 "nbd_device": "/dev/nbd4", 00:10:02.809 "bdev_name": "Nvme2n3" 00:10:02.809 }, 00:10:02.809 { 00:10:02.809 "nbd_device": "/dev/nbd5", 00:10:02.809 "bdev_name": "Nvme3n1" 00:10:02.809 } 00:10:02.809 ]' 00:10:02.809 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:02.809 18:08:13 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:10:02.809 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:02.809 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:10:02.809 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:02.809 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:02.809 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:02.809 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:03.069 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:03.069 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:03.069 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:03.069 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:03.069 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:03.069 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:03.069 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:03.069 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:03.069 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:03.069 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:03.327 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:03.327 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:03.327 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:03.327 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:03.327 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:03.327 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:03.327 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:03.327 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:03.327 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:03.327 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:03.586 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:03.586 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:03.586 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:03.586 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:03.586 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:03.586 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:03.586 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:03.586 18:08:13 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:03.586 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:03.586 18:08:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:03.845 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:03.845 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:03.845 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:03.845 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:03.845 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:03.845 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:03.845 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:03.845 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:03.845 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:03.845 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:04.105 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:04.105 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:04.105 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:04.105 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:04.105 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:04.105 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:04.105 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:04.105 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:04.105 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:04.105 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:04.364 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:04.364 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:04.364 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:04.364 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:04.364 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:04.364 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:04.364 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:04.364 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:04.364 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:04.364 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.364 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:04.624 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:04.624 18:08:14 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:04.624 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:04.624 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:04.624 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:04.624 18:08:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:04.624 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:04.882 /dev/nbd0 00:10:04.882 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:04.882 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:04.883 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:04.883 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:04.883 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:04.883 
18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:04.883 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:04.883 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:04.883 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:04.883 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:04.883 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:04.883 1+0 records in 00:10:04.883 1+0 records out 00:10:04.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582985 s, 7.0 MB/s 00:10:04.883 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.883 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:04.883 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.883 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:04.883 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:04.883 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:04.883 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:04.883 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:10:05.141 /dev/nbd1 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:05.141 1+0 records in 00:10:05.141 1+0 records out 00:10:05.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679773 s, 6.0 MB/s 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:05.141 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:10:05.399 /dev/nbd10 00:10:05.399 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:05.399 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:05.399 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:10:05.399 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:05.399 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:05.400 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:05.400 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:10:05.400 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:05.400 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:05.400 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:05.400 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:05.400 1+0 records in 00:10:05.400 1+0 records out 00:10:05.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563974 s, 7.3 MB/s 00:10:05.400 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.400 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:05.400 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.400 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:05.400 18:08:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:05.400 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:05.400 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:05.400 18:08:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:10:05.659 /dev/nbd11 00:10:05.659 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:05.659 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:05.659 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:10:05.659 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:05.659 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:05.659 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:05.659 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:10:05.659 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:05.659 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:05.659 18:08:16 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:05.659 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:05.659 1+0 records in 00:10:05.659 1+0 records out 00:10:05.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586119 s, 7.0 MB/s 00:10:05.659 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.659 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:05.659 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.659 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:05.659 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:05.659 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:05.659 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:05.659 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:10:05.918 /dev/nbd12 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:05.918 1+0 records in 00:10:05.918 1+0 records out 00:10:05.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000839035 s, 4.9 MB/s 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:05.918 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:10:06.178 /dev/nbd13 
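The same readiness check recurs above for each attached device, and repeats for nbd13 just below: a bounded retry loop against /proc/partitions, then a direct-I/O read to prove the device actually services requests. A minimal sketch of that helper, reconstructed from the xtrace lines (names and loop bounds follow the trace; the back-off sleep is an assumption, since the trace does not show one):

    # Sketch of waitfornbd as traced above (autotest_common.sh@872-893).
    # The sleep interval is assumed; the trace shows only the loop bounds.
    waitfornbd() {
        local nbd_name=$1
        local tmp_file=/tmp/nbdtest
        local i size

        # Wait for the kernel to register the device (up to 20 tries).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off
        done

        # Prove the device answers I/O: read one 4 KiB block with O_DIRECT,
        # then check that the copied file has a non-zero size.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$tmp_file")
                rm -f "$tmp_file"
                [ "$size" != 0 ] && return 0
            fi
        done
        return 1
    }
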
00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:06.178 1+0 records in 00:10:06.178 1+0 records out 00:10:06.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000749283 s, 5.5 MB/s 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:06.178 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:06.436 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:06.436 { 00:10:06.436 "nbd_device": "/dev/nbd0", 00:10:06.436 "bdev_name": "Nvme0n1" 00:10:06.436 }, 00:10:06.436 { 00:10:06.436 "nbd_device": "/dev/nbd1", 00:10:06.436 "bdev_name": "Nvme1n1" 00:10:06.436 }, 00:10:06.436 { 00:10:06.436 "nbd_device": "/dev/nbd10", 00:10:06.436 "bdev_name": "Nvme2n1" 00:10:06.436 }, 00:10:06.436 { 00:10:06.436 "nbd_device": "/dev/nbd11", 00:10:06.436 "bdev_name": "Nvme2n2" 00:10:06.436 }, 00:10:06.436 { 00:10:06.436 "nbd_device": "/dev/nbd12", 00:10:06.436 "bdev_name": "Nvme2n3" 00:10:06.436 }, 00:10:06.436 { 00:10:06.436 "nbd_device": "/dev/nbd13", 00:10:06.436 "bdev_name": "Nvme3n1" 00:10:06.436 } 00:10:06.436 ]' 00:10:06.436 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:06.436 { 00:10:06.436 "nbd_device": "/dev/nbd0", 00:10:06.436 "bdev_name": "Nvme0n1" 00:10:06.436 }, 00:10:06.436 { 00:10:06.436 "nbd_device": "/dev/nbd1", 00:10:06.436 "bdev_name": "Nvme1n1" 00:10:06.436 }, 00:10:06.436 { 00:10:06.436 "nbd_device": "/dev/nbd10", 00:10:06.436 "bdev_name": "Nvme2n1" 
00:10:06.436 }, 00:10:06.436 { 00:10:06.436 "nbd_device": "/dev/nbd11", 00:10:06.436 "bdev_name": "Nvme2n2" 00:10:06.436 }, 00:10:06.436 { 00:10:06.436 "nbd_device": "/dev/nbd12", 00:10:06.436 "bdev_name": "Nvme2n3" 00:10:06.436 }, 00:10:06.436 { 00:10:06.436 "nbd_device": "/dev/nbd13", 00:10:06.436 "bdev_name": "Nvme3n1" 00:10:06.436 } 00:10:06.436 ]' 00:10:06.436 18:08:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:06.436 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:06.436 /dev/nbd1 00:10:06.436 /dev/nbd10 00:10:06.436 /dev/nbd11 00:10:06.436 /dev/nbd12 00:10:06.436 /dev/nbd13' 00:10:06.436 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:06.436 /dev/nbd1 00:10:06.436 /dev/nbd10 00:10:06.436 /dev/nbd11 00:10:06.436 /dev/nbd12 00:10:06.436 /dev/nbd13' 00:10:06.436 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:06.695 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:10:06.695 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:10:06.695 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:10:06.695 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:10:06.695 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:10:06.695 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:06.695 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:06.695 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:06.695 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:06.695 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:06.695 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:06.695 256+0 records in 00:10:06.695 256+0 records out 00:10:06.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119243 s, 87.9 MB/s 00:10:06.695 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:06.695 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:06.695 256+0 records in 00:10:06.695 256+0 records out 00:10:06.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134687 s, 7.8 MB/s 00:10:06.695 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:06.695 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:06.953 256+0 records in 00:10:06.953 256+0 records out 00:10:06.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143536 s, 7.3 MB/s 00:10:06.953 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:06.953 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:06.953 256+0 records in 00:10:06.953 256+0 records out 00:10:06.953 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157035 s, 6.7 MB/s 00:10:06.953 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:06.953 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:07.211 256+0 records in 00:10:07.211 256+0 records out 00:10:07.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150762 s, 7.0 MB/s 00:10:07.211 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:07.211 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:07.470 256+0 records in 00:10:07.470 256+0 records out 00:10:07.470 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143253 s, 7.3 MB/s 00:10:07.470 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:07.470 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:07.470 256+0 records in 00:10:07.470 256+0 records out 00:10:07.470 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153753 s, 6.8 MB/s 00:10:07.470 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:10:07.470 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:07.470 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:07.470 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:07.470 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:07.470 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:07.470 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:07.470 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:07.470 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:07.470 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:07.470 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:07.470 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:07.470 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:07.470 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:07.470 18:08:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:07.470 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:07.470 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:07.470 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:07.470 18:08:18 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:07.470 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:07.470 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:07.470 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:07.470 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:07.470 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:07.470 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:07.470 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:07.470 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:07.729 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:07.729 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:07.729 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:07.729 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:07.729 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:07.729 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:07.729 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:07.729 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:07.729 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:07.729 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:07.987 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:07.987 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:07.987 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:07.987 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:07.987 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:07.987 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:07.987 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:07.987 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:07.987 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:07.987 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:08.552 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:08.552 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:08.552 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:08.552 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:08.552 18:08:18 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:08.552 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:08.552 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:08.552 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:08.552 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:08.552 18:08:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:08.552 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:08.552 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:08.552 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:08.552 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:08.552 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:08.552 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:08.552 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:08.552 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:08.552 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:08.552 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:08.810 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:08.810 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:08.810 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:08.810 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:08.810 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:08.810 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:08.810 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:08.810 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:08.810 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:08.810 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:09.068 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:09.068 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:09.068 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:09.068 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:09.068 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:09.068 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:09.068 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:09.068 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:09.068 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:09.068 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:10:09.068 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:09.326 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:09.326 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:09.326 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:09.326 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:09.326 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:09.326 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:09.326 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:09.326 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:09.326 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:09.326 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:09.326 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:09.326 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:09.326 18:08:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:09.326 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:09.326 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:09.326 18:08:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:09.891 malloc_lvol_verify 00:10:09.891 18:08:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:10.148 67ea149a-4145-4937-af32-711e41cc18e2 00:10:10.148 18:08:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:10.406 5a6f22b5-b7bd-4e60-a5a7-1283ae6f5877 00:10:10.406 18:08:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:10.663 /dev/nbd0 00:10:10.663 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:10.663 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:10.663 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:10.663 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:10.663 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:10.663 mke2fs 1.47.0 (5-Feb-2023) 00:10:10.663 Discarding device blocks: 0/4096 done 00:10:10.663 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:10.663 00:10:10.663 Allocating group tables: 0/1 done 00:10:10.663 Writing inode tables: 0/1 done 00:10:10.663 Creating journal (1024 blocks): done 00:10:10.663 Writing superblocks and filesystem accounting information: 0/1 done 00:10:10.663 00:10:10.663 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:10.663 18:08:21 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:10.664 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:10.664 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:10.664 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:10.664 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:10.664 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61140 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61140 ']' 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61140 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61140 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.921 killing process with pid 61140 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61140' 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61140 00:10:10.921 18:08:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61140 00:10:13.471 18:08:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:13.471 00:10:13.471 real 0m13.560s 00:10:13.471 user 0m17.213s 00:10:13.471 sys 0m5.259s 00:10:13.471 18:08:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.471 ************************************ 00:10:13.471 END TEST bdev_nbd 00:10:13.471 18:08:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:13.471 ************************************ 00:10:13.471 18:08:23 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:10:13.471 18:08:23 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:10:13.471 skipping fio tests on NVMe due to multi-ns failures. 00:10:13.471 18:08:23 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
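The lvol pass that closed out bdev_nbd above boils down to a short RPC sequence: create a malloc backing bdev, build an lvstore and a logical volume on it, export the lvol over NBD, and show it can hold a filesystem. Condensed from the trace (paths, names, and arguments as logged; a sketch of the flow, not the full nbd_with_lvol_verify helper -- the size comments are interpretation):

    # Condensed from the nbd_with_lvol_verify trace above.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # backing bdev, as logged
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore "lvs" on top
    $rpc bdev_lvol_create lvol 4 -l lvs                    # lvol "lvol" inside it
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose over NBD

    # The helper then waits for /sys/block/nbd0/size to report a non-zero
    # capacity (wait_for_nbd_set_capacity in the trace) before formatting.
    mkfs.ext4 /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd0
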
00:10:13.471 18:08:23 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:13.471 18:08:23 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:13.471 18:08:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:13.471 18:08:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.471 18:08:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:13.471 ************************************ 00:10:13.471 START TEST bdev_verify 00:10:13.472 ************************************ 00:10:13.472 18:08:23 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:13.472 [2024-12-06 18:08:23.896824] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:10:13.472 [2024-12-06 18:08:23.896957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61549 ] 00:10:13.729 [2024-12-06 18:08:24.085910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:13.729 [2024-12-06 18:08:24.207375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.729 [2024-12-06 18:08:24.207407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.702 Running I/O for 5 seconds... 00:10:16.572 17792.00 IOPS, 69.50 MiB/s [2024-12-06T18:08:28.526Z] 19168.00 IOPS, 74.88 MiB/s [2024-12-06T18:08:29.094Z] 20480.00 IOPS, 80.00 MiB/s [2024-12-06T18:08:30.035Z] 21056.00 IOPS, 82.25 MiB/s [2024-12-06T18:08:30.295Z] 21235.20 IOPS, 82.95 MiB/s 00:10:19.719 Latency(us) 00:10:19.719 [2024-12-06T18:08:30.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:19.719 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:19.719 Verification LBA range: start 0x0 length 0xbd0bd 00:10:19.719 Nvme0n1 : 5.07 1742.71 6.81 0.00 0.00 73290.10 15370.69 122965.54 00:10:19.719 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:19.719 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:19.719 Nvme0n1 : 5.06 1772.07 6.92 0.00 0.00 72067.23 15265.41 105699.83 00:10:19.719 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:19.719 Verification LBA range: start 0x0 length 0xa0000 00:10:19.719 Nvme1n1 : 5.07 1742.17 6.81 0.00 0.00 73158.57 14739.02 122965.54 00:10:19.719 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:19.719 Verification LBA range: start 0xa0000 length 0xa0000 00:10:19.719 Nvme1n1 : 5.06 1771.04 6.92 0.00 0.00 72003.53 16107.64 96014.19 00:10:19.719 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:19.719 Verification LBA range: start 0x0 length 0x80000 00:10:19.719 Nvme2n1 : 5.07 1741.66 6.80 0.00 0.00 72958.38 14212.63 120438.85 00:10:19.719 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:19.719 Verification LBA range: start 0x80000 length 0x80000 00:10:19.719 Nvme2n1 : 5.06 1770.53 6.92 0.00 0.00 71742.43 16318.20 99383.11 00:10:19.719 Job: 
Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:19.719 Verification LBA range: start 0x0 length 0x80000 00:10:19.719 Nvme2n2 : 5.07 1740.37 6.80 0.00 0.00 72831.30 16002.36 118754.39 00:10:19.719 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:19.719 Verification LBA range: start 0x80000 length 0x80000 00:10:19.719 Nvme2n2 : 5.06 1769.97 6.91 0.00 0.00 71650.96 16212.92 100646.45 00:10:19.719 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:19.719 Verification LBA range: start 0x0 length 0x80000 00:10:19.719 Nvme2n3 : 5.08 1739.97 6.80 0.00 0.00 72715.21 15370.69 116227.70 00:10:19.719 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:19.719 Verification LBA range: start 0x80000 length 0x80000 00:10:19.719 Nvme2n3 : 5.06 1769.54 6.91 0.00 0.00 71544.11 15370.69 104436.49 00:10:19.719 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:19.719 Verification LBA range: start 0x0 length 0x20000 00:10:19.719 Nvme3n1 : 5.08 1739.57 6.80 0.00 0.00 72608.21 14949.58 124650.00 00:10:19.719 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:19.719 Verification LBA range: start 0x20000 length 0x20000 00:10:19.719 Nvme3n1 : 5.07 1779.59 6.95 0.00 0.00 71056.60 2737.25 105278.71 00:10:19.719 [2024-12-06T18:08:30.295Z] =================================================================================================================== 00:10:19.719 [2024-12-06T18:08:30.295Z] Total : 21079.17 82.34 0.00 0.00 72296.98 2737.25 124650.00 00:10:21.097 00:10:21.097 real 0m7.693s 00:10:21.097 user 0m14.205s 00:10:21.097 sys 0m0.311s 00:10:21.097 18:08:31 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.097 18:08:31 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:21.097 ************************************ 00:10:21.097 END TEST bdev_verify 00:10:21.097 ************************************ 00:10:21.097 18:08:31 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:21.097 18:08:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:21.097 18:08:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.097 18:08:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:21.097 ************************************ 00:10:21.097 START TEST bdev_verify_big_io 00:10:21.097 ************************************ 00:10:21.097 18:08:31 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:21.097 [2024-12-06 18:08:31.658636] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
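Both verify stages -- the 4 KiB pass that just completed and the 64 KiB pass starting here -- drive the same bdevperf example binary against the generated bdev.json; only the I/O size option differs. The two invocations, copied from the trace:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # bdev_verify: 4 KiB I/O, queue depth 128, 5 s, cores 0-1 (-m 0x3)
    $BDEVPERF --json "$CONF" -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''

    # bdev_verify_big_io: identical shape with 64 KiB I/O
    $BDEVPERF --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
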
00:10:21.097 [2024-12-06 18:08:31.658760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61647 ] 00:10:21.358 [2024-12-06 18:08:31.843521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:21.616 [2024-12-06 18:08:31.962710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.616 [2024-12-06 18:08:31.962719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.555 Running I/O for 5 seconds... 00:10:25.831 1619.00 IOPS, 101.19 MiB/s [2024-12-06T18:08:37.340Z] 2237.50 IOPS, 139.84 MiB/s [2024-12-06T18:08:38.716Z] 2271.00 IOPS, 141.94 MiB/s [2024-12-06T18:08:38.716Z] 2383.75 IOPS, 148.98 MiB/s [2024-12-06T18:08:38.716Z] 2456.40 IOPS, 153.53 MiB/s 00:10:28.140 Latency(us) 00:10:28.140 [2024-12-06T18:08:38.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.140 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:28.140 Verification LBA range: start 0x0 length 0xbd0b 00:10:28.140 Nvme0n1 : 5.56 172.69 10.79 0.00 0.00 714543.20 16844.59 983724.31 00:10:28.140 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:28.140 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:28.140 Nvme0n1 : 5.61 182.19 11.39 0.00 0.00 690289.10 22319.09 744531.07 00:10:28.140 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:28.140 Verification LBA range: start 0x0 length 0xa000 00:10:28.140 Nvme1n1 : 5.66 176.89 11.06 0.00 0.00 678229.44 65693.92 798433.77 00:10:28.141 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:28.141 Verification LBA range: start 0xa000 length 0xa000 00:10:28.141 Nvme1n1 : 5.61 178.34 11.15 0.00 0.00 684244.86 45480.40 619881.07 00:10:28.141 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:28.141 Verification LBA range: start 0x0 length 0x8000 00:10:28.141 Nvme2n1 : 5.67 180.71 11.29 0.00 0.00 645040.32 33899.75 606405.40 00:10:28.141 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:28.141 Verification LBA range: start 0x8000 length 0x8000 00:10:28.141 Nvme2n1 : 5.61 178.12 11.13 0.00 0.00 669502.93 46112.08 629987.83 00:10:28.141 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:28.141 Verification LBA range: start 0x0 length 0x8000 00:10:28.141 Nvme2n2 : 5.69 183.92 11.50 0.00 0.00 614574.45 23371.87 599667.56 00:10:28.141 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:28.141 Verification LBA range: start 0x8000 length 0x8000 00:10:28.141 Nvme2n2 : 5.62 182.33 11.40 0.00 0.00 644036.16 47585.98 643463.51 00:10:28.141 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:28.141 Verification LBA range: start 0x0 length 0x8000 00:10:28.141 Nvme2n3 : 5.77 196.42 12.28 0.00 0.00 564244.15 10422.59 1172383.77 00:10:28.141 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:28.141 Verification LBA range: start 0x8000 length 0x8000 00:10:28.141 Nvme2n3 : 5.65 185.23 11.58 0.00 0.00 619092.39 28846.37 663677.02 00:10:28.141 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:28.141 Verification LBA range: start 0x0 length 0x2000 00:10:28.141 Nvme3n1 : 5.84 239.07 
14.94 0.00 0.00 452513.43 700.76 1192597.28 00:10:28.141 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:28.141 Verification LBA range: start 0x2000 length 0x2000 00:10:28.141 Nvme3n1 : 5.67 199.46 12.47 0.00 0.00 564571.31 3211.00 670414.86 00:10:28.141 [2024-12-06T18:08:38.717Z] =================================================================================================================== 00:10:28.141 [2024-12-06T18:08:38.717Z] Total : 2255.36 140.96 0.00 0.00 621480.73 700.76 1192597.28 00:10:29.515 00:10:29.515 real 0m8.515s 00:10:29.515 user 0m15.852s 00:10:29.515 sys 0m0.311s 00:10:29.515 18:08:40 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.515 18:08:40 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:29.515 ************************************ 00:10:29.515 END TEST bdev_verify_big_io 00:10:29.515 ************************************ 00:10:29.773 18:08:40 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:29.773 18:08:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:29.773 18:08:40 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.773 18:08:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:29.773 ************************************ 00:10:29.773 START TEST bdev_write_zeroes 00:10:29.773 ************************************ 00:10:29.773 18:08:40 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:29.773 [2024-12-06 18:08:40.229398] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:10:29.773 [2024-12-06 18:08:40.229525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61761 ] 00:10:30.031 [2024-12-06 18:08:40.411001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.031 [2024-12-06 18:08:40.529972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.977 Running I/O for 1 seconds... 
00:10:31.909 71305.00 IOPS, 278.54 MiB/s 00:10:31.909 Latency(us) 00:10:31.909 [2024-12-06T18:08:42.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:31.909 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:31.909 Nvme0n1 : 1.02 11817.94 46.16 0.00 0.00 10811.44 5764.01 41058.70 00:10:31.909 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:31.909 Nvme1n1 : 1.02 11876.92 46.39 0.00 0.00 10746.76 9369.81 29478.04 00:10:31.909 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:31.909 Nvme2n1 : 1.02 11818.59 46.17 0.00 0.00 10771.93 8896.05 34741.98 00:10:31.909 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:31.909 Nvme2n2 : 1.02 11854.35 46.31 0.00 0.00 10685.89 8790.77 34320.86 00:10:31.909 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:31.909 Nvme2n3 : 1.02 11843.65 46.26 0.00 0.00 10682.46 8264.38 34110.30 00:10:31.909 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:31.909 Nvme3n1 : 1.02 11833.01 46.22 0.00 0.00 10671.87 6948.40 34320.86 00:10:31.909 [2024-12-06T18:08:42.485Z] =================================================================================================================== 00:10:31.909 [2024-12-06T18:08:42.486Z] Total : 71044.45 277.52 0.00 0.00 10728.28 5764.01 41058.70 00:10:33.288 00:10:33.288 real 0m3.313s 00:10:33.288 user 0m2.932s 00:10:33.288 sys 0m0.266s 00:10:33.288 ************************************ 00:10:33.288 END TEST bdev_write_zeroes 00:10:33.288 ************************************ 00:10:33.288 18:08:43 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.288 18:08:43 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:33.288 18:08:43 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:33.288 18:08:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:33.288 18:08:43 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.288 18:08:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:33.288 ************************************ 00:10:33.288 START TEST bdev_json_nonenclosed 00:10:33.288 ************************************ 00:10:33.288 18:08:43 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:33.288 [2024-12-06 18:08:43.607566] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:10:33.288 [2024-12-06 18:08:43.607684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61815 ] 00:10:33.288 [2024-12-06 18:08:43.789489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.616 [2024-12-06 18:08:43.900879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.616 [2024-12-06 18:08:43.901160] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:33.616 [2024-12-06 18:08:43.901192] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:33.616 [2024-12-06 18:08:43.901205] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:33.616 ************************************ 00:10:33.616 END TEST bdev_json_nonenclosed 00:10:33.616 ************************************ 00:10:33.616 00:10:33.616 real 0m0.646s 00:10:33.616 user 0m0.389s 00:10:33.616 sys 0m0.152s 00:10:33.616 18:08:44 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.616 18:08:44 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:33.883 18:08:44 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:33.883 18:08:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:33.884 18:08:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.884 18:08:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:33.884 ************************************ 00:10:33.884 START TEST bdev_json_nonarray 00:10:33.884 ************************************ 00:10:33.884 18:08:44 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:33.884 [2024-12-06 18:08:44.327292] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:10:33.884 [2024-12-06 18:08:44.327562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61839 ] 00:10:34.143 [2024-12-06 18:08:44.509649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.143 [2024-12-06 18:08:44.621452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.143 [2024-12-06 18:08:44.621558] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
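Both JSON negative tests hand bdevperf a deliberately malformed configuration and expect it to abort. Reconstructed only from the two json_config errors logged here (the actual nonenclosed.json and nonarray.json shipped in the repo may differ), the failing shapes are roughly:

    # Hypothetical reconstructions of the two invalid shapes; file names
    # and contents are inferred from the error messages, not copied from
    # the repo.
    cat > /tmp/nonenclosed.json <<'EOF'
    "subsystems": [ { "subsystem": "bdev", "config": [] } ]
    EOF
    # -> "Invalid JSON configuration: not enclosed in {}."

    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev", "config": [] } }
    EOF
    # -> "Invalid JSON configuration: 'subsystems' should be an array."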
00:10:34.143 [2024-12-06 18:08:44.621589] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:34.143 [2024-12-06 18:08:44.621601] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:34.402 00:10:34.402 real 0m0.642s 00:10:34.402 user 0m0.407s 00:10:34.402 sys 0m0.130s 00:10:34.402 18:08:44 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.402 ************************************ 00:10:34.402 END TEST bdev_json_nonarray 00:10:34.402 ************************************ 00:10:34.402 18:08:44 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:34.402 18:08:44 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:10:34.402 18:08:44 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:10:34.402 18:08:44 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:10:34.402 18:08:44 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:10:34.402 18:08:44 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:10:34.402 18:08:44 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:34.402 18:08:44 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:34.402 18:08:44 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:10:34.402 18:08:44 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:10:34.402 18:08:44 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:10:34.402 18:08:44 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:10:34.402 ************************************ 00:10:34.402 END TEST blockdev_nvme 00:10:34.402 ************************************ 00:10:34.402 00:10:34.402 real 0m44.868s 00:10:34.402 user 1m5.408s 00:10:34.402 sys 0m8.318s 00:10:34.402 18:08:44 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.402 18:08:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:34.661 18:08:45 -- spdk/autotest.sh@209 -- # uname -s 00:10:34.661 18:08:45 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:10:34.661 18:08:45 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:34.661 18:08:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:34.661 18:08:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.661 18:08:45 -- common/autotest_common.sh@10 -- # set +x 00:10:34.661 ************************************ 00:10:34.661 START TEST blockdev_nvme_gpt 00:10:34.661 ************************************ 00:10:34.661 18:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:34.661 * Looking for test storage... 
00:10:34.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:34.661 18:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:34.661 18:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:10:34.661 18:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:34.920 18:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.920 18:08:45 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:10:34.920 18:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.920 18:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:34.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.920 --rc genhtml_branch_coverage=1 00:10:34.920 --rc genhtml_function_coverage=1 00:10:34.920 --rc genhtml_legend=1 00:10:34.920 --rc geninfo_all_blocks=1 00:10:34.920 --rc geninfo_unexecuted_blocks=1 00:10:34.920 00:10:34.920 ' 00:10:34.920 18:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:34.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.920 --rc 
genhtml_branch_coverage=1 00:10:34.920 --rc genhtml_function_coverage=1 00:10:34.920 --rc genhtml_legend=1 00:10:34.920 --rc geninfo_all_blocks=1 00:10:34.920 --rc geninfo_unexecuted_blocks=1 00:10:34.920 00:10:34.920 ' 00:10:34.920 18:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:34.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.920 --rc genhtml_branch_coverage=1 00:10:34.920 --rc genhtml_function_coverage=1 00:10:34.920 --rc genhtml_legend=1 00:10:34.920 --rc geninfo_all_blocks=1 00:10:34.920 --rc geninfo_unexecuted_blocks=1 00:10:34.920 00:10:34.920 ' 00:10:34.920 18:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:34.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.920 --rc genhtml_branch_coverage=1 00:10:34.920 --rc genhtml_function_coverage=1 00:10:34.920 --rc genhtml_legend=1 00:10:34.920 --rc geninfo_all_blocks=1 00:10:34.920 --rc geninfo_unexecuted_blocks=1 00:10:34.920 00:10:34.920 ' 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61928 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:34.920 18:08:45 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61928 00:10:34.920 18:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 61928 ']' 00:10:34.920 18:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.920 18:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.920 18:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.920 18:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.921 18:08:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:34.921 [2024-12-06 18:08:45.397780] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:10:34.921 [2024-12-06 18:08:45.398069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61928 ] 00:10:35.179 [2024-12-06 18:08:45.582095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.179 [2024-12-06 18:08:45.698385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.114 18:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.114 18:08:46 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:10:36.114 18:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:10:36.114 18:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:10:36.114 18:08:46 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:36.682 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:36.941 Waiting for block devices as requested 00:10:36.941 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:37.200 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:37.200 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:37.459 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:42.866 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:42.866 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:10:42.866 18:08:52 
blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:42.866 18:08:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
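The get_zoned_devs scan above reduces to one sysfs check per namespace: a device counts as zoned when /sys/block/<ns>/queue/zoned exists and reads anything other than "none". A standalone sketch of that check:

    #!/usr/bin/env bash
    # Sketch of the zoned-namespace detection performed above: every
    # namespace whose queue/zoned attribute is present and not "none"
    # is treated as zoned.
    for ns in /sys/class/nvme/nvme*/nvme*n*; do
      dev=$(basename "$ns")
      zoned_attr="/sys/block/$dev/queue/zoned"
      if [[ -e $zoned_attr ]] && [[ $(< "$zoned_attr") != none ]]; then
        echo "$dev is zoned ($(< "$zoned_attr"))"
      fi
    done

In this run every namespace reported "none", so the zoned map stays empty and all six block devices proceed to the GPT setup that follows.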
00:10:42.866 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:10:42.866 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:10:42.866 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:10:42.866 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:10:42.866 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:10:42.866 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:10:42.866 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:10:42.866 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:10:42.866 BYT; 00:10:42.866 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:42.866 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:10:42.866 BYT; 00:10:42.867 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:42.867 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:10:42.867 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:10:42.867 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:10:42.867 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:42.867 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:42.867 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:42.867 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:10:42.867 18:08:52 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:10:42.867 18:08:52 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:42.867 18:08:52 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:42.867 18:08:52 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:10:42.867 18:08:52 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:10:42.867 18:08:52 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:42.867 18:08:52 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:42.867 18:08:52 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:42.867 18:08:52 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:42.867 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:42.867 18:08:52 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:10:42.867 18:08:52 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:10:42.867 18:08:52 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:42.867 18:08:52 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:42.867 18:08:52 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:10:42.867 18:08:52 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:10:42.867 18:08:52 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:42.867 18:08:53 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:42.867 18:08:53 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:42.867 18:08:53 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:42.867 18:08:53 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:42.867 18:08:53 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:10:43.800 The operation has completed successfully. 00:10:43.800 18:08:54 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:10:44.733 The operation has completed successfully. 00:10:44.733 18:08:55 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:45.302 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:46.238 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:46.238 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:46.238 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:46.238 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:46.238 18:08:56 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:10:46.238 18:08:56 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.238 18:08:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:46.238 [] 00:10:46.238 18:08:56 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.238 18:08:56 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:10:46.238 18:08:56 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:10:46.238 18:08:56 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:46.238 18:08:56 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:46.238 18:08:56 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:46.239 18:08:56 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.239 18:08:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:46.808 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.808 18:08:57 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:10:46.808 18:08:57 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.808 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:46.808 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.808 18:08:57 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:10:46.808 18:08:57 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:10:46.808 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.808 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:46.808 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.808 18:08:57 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:10:46.808 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.808 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:46.808 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.808 18:08:57 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:46.808 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.808 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:46.808 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.808 18:08:57 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:10:46.808 18:08:57 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:10:46.808 18:08:57 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:10:46.808 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.808 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:46.808 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.808 18:08:57 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:10:46.808 18:08:57 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:10:46.809 18:08:57 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "6613ef58-d10e-4d91-a5d6-8012dbbcec75"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6613ef58-d10e-4d91-a5d6-8012dbbcec75",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f6c3c08c-ca85-4687-9d99-8cd5e4b0c5f9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f6c3c08c-ca85-4687-9d99-8cd5e4b0c5f9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e7985785-2719-4037-99d7-1aa2d1d3e18f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e7985785-2719-4037-99d7-1aa2d1d3e18f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "5bd68480-2ed1-4f11-9e56-ffca87b3e83e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5bd68480-2ed1-4f11-9e56-ffca87b3e83e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "cbd16b5e-49fc-428a-8971-5d6559731db7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cbd16b5e-49fc-428a-8971-5d6559731db7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:46.809 18:08:57 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:10:46.809 18:08:57 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:10:46.809 18:08:57 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:10:46.809 18:08:57 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 61928 00:10:46.809 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 61928 ']' 00:10:46.809 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 61928 00:10:47.068 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:10:47.068 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.068 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61928 00:10:47.068 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.068 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.068 killing process with pid 61928 00:10:47.068 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61928' 00:10:47.068 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 61928 00:10:47.068 18:08:57 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 61928 00:10:49.601 18:08:59 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:49.601 18:08:59 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:49.601 18:08:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:49.601 18:08:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.601 18:08:59 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:49.601 ************************************ 00:10:49.601 START TEST bdev_hello_world 00:10:49.601 ************************************ 00:10:49.601 18:08:59 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:49.601 [2024-12-06 18:08:59.915291] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:10:49.601 [2024-12-06 18:08:59.915418] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62572 ] 00:10:49.601 [2024-12-06 18:09:00.098340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.859 [2024-12-06 18:09:00.215524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.426 [2024-12-06 18:09:00.891712] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:50.426 [2024-12-06 18:09:00.891780] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:50.426 [2024-12-06 18:09:00.891822] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:50.426 [2024-12-06 18:09:00.894788] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:50.426 [2024-12-06 18:09:00.895193] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:50.426 [2024-12-06 18:09:00.895225] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:50.426 [2024-12-06 18:09:00.895403] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
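The Hello World round trip just logged (open Nvme0n1, write the string, read it back) is the hello_bdev example driven by the same JSON attach config as bdevperf; a minimal sketch of the invocation as run here:

    #!/usr/bin/env bash
    # Sketch of the hello_bdev run above. -b names the bdev to open;
    # --json supplies the NVMe attach configuration shared by the
    # other tests in this log.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/build/examples/hello_bdev" \
      --json "$SPDK_DIR/test/bdev/bdev.json" \
      -b Nvme0n1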
00:10:50.426 00:10:50.426 [2024-12-06 18:09:00.895426] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:51.871 00:10:51.871 real 0m2.204s 00:10:51.871 user 0m1.832s 00:10:51.871 sys 0m0.264s 00:10:51.871 18:09:02 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.871 18:09:02 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:51.871 ************************************ 00:10:51.871 END TEST bdev_hello_world 00:10:51.871 ************************************ 00:10:51.871 18:09:02 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:10:51.871 18:09:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:51.871 18:09:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.871 18:09:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:51.871 ************************************ 00:10:51.871 START TEST bdev_bounds 00:10:51.871 ************************************ 00:10:51.871 18:09:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:10:51.871 18:09:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62614 00:10:51.871 18:09:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:51.871 18:09:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:51.871 Process bdevio pid: 62614 00:10:51.871 18:09:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62614' 00:10:51.871 18:09:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62614 00:10:51.871 18:09:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62614 ']' 00:10:51.871 18:09:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.871 18:09:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.871 18:09:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.871 18:09:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.871 18:09:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:51.871 [2024-12-06 18:09:02.191328] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
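bdev_bounds starts the bdevio harness in wait-for-RPC mode and then triggers the test matrix from a companion script; a sketch of that two-step flow, with flag glosses inferred from the command line above:

    #!/usr/bin/env bash
    # Sketch of the bdev_bounds flow: launch bdevio waiting for an RPC
    # trigger (-w), with -s 0 reserved memory, against the shared
    # bdev.json, then fire the per-bdev suites that follow in this log.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/test/bdev/bdevio/bdevio" -w -s 0 \
      --json "$SPDK_DIR/test/bdev/bdev.json" &

    # The real harness waits for the RPC socket before this step.
    "$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests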
00:10:51.871 [2024-12-06 18:09:02.191451] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62614 ] 00:10:51.871 [2024-12-06 18:09:02.374867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:52.129 [2024-12-06 18:09:02.494977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.129 [2024-12-06 18:09:02.495132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.129 [2024-12-06 18:09:02.495163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.695 18:09:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.695 18:09:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:10:52.695 18:09:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:52.953 I/O targets: 00:10:52.953 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:52.953 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:10:52.953 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:10:52.953 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:52.953 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:52.953 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:52.953 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:52.953 00:10:52.953 00:10:52.953 CUnit - A unit testing framework for C - Version 2.1-3 00:10:52.953 http://cunit.sourceforge.net/ 00:10:52.953 00:10:52.953 00:10:52.953 Suite: bdevio tests on: Nvme3n1 00:10:52.953 Test: blockdev write read block ...passed 00:10:52.953 Test: blockdev write zeroes read block ...passed 00:10:52.953 Test: blockdev write zeroes read no split ...passed 00:10:52.953 Test: blockdev write zeroes read split ...passed 00:10:52.953 Test: blockdev write zeroes read split partial ...passed 00:10:52.953 Test: blockdev reset ...[2024-12-06 18:09:03.395249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:10:52.954 [2024-12-06 18:09:03.399206] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:10:52.954 passed 00:10:52.954 Test: blockdev write read 8 blocks ...passed 00:10:52.954 Test: blockdev write read size > 128k ...passed 00:10:52.954 Test: blockdev write read invalid size ...passed 00:10:52.954 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:52.954 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:52.954 Test: blockdev write read max offset ...passed 00:10:52.954 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:52.954 Test: blockdev writev readv 8 blocks ...passed 00:10:52.954 Test: blockdev writev readv 30 x 1block ...passed 00:10:52.954 Test: blockdev writev readv block ...passed 00:10:52.954 Test: blockdev writev readv size > 128k ...passed 00:10:52.954 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:52.954 Test: blockdev comparev and writev ...[2024-12-06 18:09:03.408883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b2c04000 len:0x1000 00:10:52.954 [2024-12-06 18:09:03.409081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:52.954 passed 00:10:52.954 Test: blockdev nvme passthru rw ...passed 00:10:52.954 Test: blockdev nvme passthru vendor specific ...[2024-12-06 18:09:03.410288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:52.954 [2024-12-06 18:09:03.410473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:10:52.954 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:10:52.954 passed 00:10:52.954 Test: blockdev copy ...passed 00:10:52.954 Suite: bdevio tests on: Nvme2n3 00:10:52.954 Test: blockdev write read block ...passed 00:10:52.954 Test: blockdev write zeroes read block ...passed 00:10:52.954 Test: blockdev write zeroes read no split ...passed 00:10:52.954 Test: blockdev write zeroes read split ...passed 00:10:52.954 Test: blockdev write zeroes read split partial ...passed 00:10:52.954 Test: blockdev reset ...[2024-12-06 18:09:03.493489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:52.954 [2024-12-06 18:09:03.497718] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:52.954 passed 00:10:52.954 Test: blockdev write read 8 blocks ...passed 00:10:52.954 Test: blockdev write read size > 128k ...passed 00:10:52.954 Test: blockdev write read invalid size ...passed 00:10:52.954 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:52.954 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:52.954 Test: blockdev write read max offset ...passed 00:10:52.954 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:52.954 Test: blockdev writev readv 8 blocks ...passed 00:10:52.954 Test: blockdev writev readv 30 x 1block ...passed 00:10:52.954 Test: blockdev writev readv block ...passed 00:10:52.954 Test: blockdev writev readv size > 128k ...passed 00:10:52.954 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:52.954 Test: blockdev comparev and writev ...[2024-12-06 18:09:03.506797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b2c02000 len:0x1000 00:10:52.954 [2024-12-06 18:09:03.506854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:52.954 passed 00:10:52.954 Test: blockdev nvme passthru rw ...passed 00:10:52.954 Test: blockdev nvme passthru vendor specific ...passed 00:10:52.954 Test: blockdev nvme admin passthru ...[2024-12-06 18:09:03.508289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:52.954 [2024-12-06 18:09:03.508339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:52.954 passed 00:10:52.954 Test: blockdev copy ...passed 00:10:52.954 Suite: bdevio tests on: Nvme2n2 00:10:52.954 Test: blockdev write read block ...passed 00:10:52.954 Test: blockdev write zeroes read block ...passed 00:10:52.954 Test: blockdev write zeroes read no split ...passed 00:10:53.212 Test: blockdev write zeroes read split ...passed 00:10:53.212 Test: blockdev write zeroes read split partial ...passed 00:10:53.212 Test: blockdev reset ...[2024-12-06 18:09:03.591374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:53.212 [2024-12-06 18:09:03.595681] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:53.212 passed 00:10:53.212 Test: blockdev write read 8 blocks ...passed 00:10:53.212 Test: blockdev write read size > 128k ...passed 00:10:53.212 Test: blockdev write read invalid size ...passed 00:10:53.212 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.212 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.212 Test: blockdev write read max offset ...passed 00:10:53.212 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.212 Test: blockdev writev readv 8 blocks ...passed 00:10:53.212 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.212 Test: blockdev writev readv block ...passed 00:10:53.212 Test: blockdev writev readv size > 128k ...passed 00:10:53.212 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.212 Test: blockdev comparev and writev ...[2024-12-06 18:09:03.604568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7238000 len:0x1000 00:10:53.212 [2024-12-06 18:09:03.604625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:53.212 passed 00:10:53.212 Test: blockdev nvme passthru rw ...passed 00:10:53.212 Test: blockdev nvme passthru vendor specific ...[2024-12-06 18:09:03.605588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:53.212 [2024-12-06 18:09:03.605631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:53.212 passed 00:10:53.212 Test: blockdev nvme admin passthru ...passed 00:10:53.212 Test: blockdev copy ...passed 00:10:53.212 Suite: bdevio tests on: Nvme2n1 00:10:53.212 Test: blockdev write read block ...passed 00:10:53.212 Test: blockdev write zeroes read block ...passed 00:10:53.212 Test: blockdev write zeroes read no split ...passed 00:10:53.212 Test: blockdev write zeroes read split ...passed 00:10:53.212 Test: blockdev write zeroes read split partial ...passed 00:10:53.212 Test: blockdev reset ...[2024-12-06 18:09:03.686316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:53.212 [2024-12-06 18:09:03.692132] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:53.212 passed 00:10:53.212 Test: blockdev write read 8 blocks ...passed 00:10:53.212 Test: blockdev write read size > 128k ...passed 00:10:53.212 Test: blockdev write read invalid size ...passed 00:10:53.212 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.212 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.212 Test: blockdev write read max offset ...passed 00:10:53.212 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.212 Test: blockdev writev readv 8 blocks ...passed 00:10:53.212 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.212 Test: blockdev writev readv block ...passed 00:10:53.212 Test: blockdev writev readv size > 128k ...passed 00:10:53.212 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.212 Test: blockdev comparev and writev ...[2024-12-06 18:09:03.707364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7234000 len:0x1000 00:10:53.212 [2024-12-06 18:09:03.707433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:53.212 passed 00:10:53.212 Test: blockdev nvme passthru rw ...passed 00:10:53.212 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.212 Test: blockdev nvme admin passthru ...[2024-12-06 18:09:03.708641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:53.212 [2024-12-06 18:09:03.708681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:53.212 passed 00:10:53.212 Test: blockdev copy ...passed 00:10:53.212 Suite: bdevio tests on: Nvme1n1p2 00:10:53.212 Test: blockdev write read block ...passed 00:10:53.212 Test: blockdev write zeroes read block ...passed 00:10:53.212 Test: blockdev write zeroes read no split ...passed 00:10:53.470 Test: blockdev write zeroes read split ...passed 00:10:53.470 Test: blockdev write zeroes read split partial ...passed 00:10:53.470 Test: blockdev reset ...[2024-12-06 18:09:03.825235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:53.470 [2024-12-06 18:09:03.829426] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:53.470 passed 00:10:53.470 Test: blockdev write read 8 blocks ...passed 00:10:53.470 Test: blockdev write read size > 128k ...passed 00:10:53.470 Test: blockdev write read invalid size ...passed 00:10:53.470 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.470 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.470 Test: blockdev write read max offset ...passed 00:10:53.470 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.471 Test: blockdev writev readv 8 blocks ...passed 00:10:53.471 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.471 Test: blockdev writev readv block ...passed 00:10:53.471 Test: blockdev writev readv size > 128k ...passed 00:10:53.471 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.471 Test: blockdev comparev and writev ...[2024-12-06 18:09:03.839046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c7230000 len:0x1000 00:10:53.471 [2024-12-06 18:09:03.839117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:53.471 passed 00:10:53.471 Test: blockdev nvme passthru rw ...passed 00:10:53.471 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.471 Test: blockdev nvme admin passthru ...passed 00:10:53.471 Test: blockdev copy ...passed 00:10:53.471 Suite: bdevio tests on: Nvme1n1p1 00:10:53.471 Test: blockdev write read block ...passed 00:10:53.471 Test: blockdev write zeroes read block ...passed 00:10:53.471 Test: blockdev write zeroes read no split ...passed 00:10:53.471 Test: blockdev write zeroes read split ...passed 00:10:53.471 Test: blockdev write zeroes read split partial ...passed 00:10:53.471 Test: blockdev reset ...[2024-12-06 18:09:03.926499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:53.471 [2024-12-06 18:09:03.930667] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
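Nvme1n1p1 and Nvme1n1p2 are GPT partitions of the same namespace, which is why both of these suites reset the same controller at 0000:00:11.0, and why COMPAREs issued at partition-relative LBA 0 surface at absolute LBA 655360 in Nvme1n1p2's trace above and at LBA 256 in Nvme1n1p1's trace below: the partition layer adds each partition's start offset before the command reaches the drive. The arithmetic, assuming 4096-byte logical blocks (the block size is not stated in this log):

    # Partition start LBAs recovered from the COMPARE traces.
    p1_start=256 p2_start=655360 bs=4096
    printf 'Nvme1n1p1 starts %d MiB into the namespace\n' $(( p1_start * bs / 1024 / 1024 ))
    printf 'Nvme1n1p2 starts %d MiB into the namespace\n' $(( p2_start * bs / 1024 / 1024 ))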
00:10:53.471 passed 00:10:53.471 Test: blockdev write read 8 blocks ...passed 00:10:53.471 Test: blockdev write read size > 128k ...passed 00:10:53.471 Test: blockdev write read invalid size ...passed 00:10:53.471 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.471 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.471 Test: blockdev write read max offset ...passed 00:10:53.471 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.471 Test: blockdev writev readv 8 blocks ...passed 00:10:53.471 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.471 Test: blockdev writev readv block ...passed 00:10:53.471 Test: blockdev writev readv size > 128k ...passed 00:10:53.471 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.471 Test: blockdev comparev and writev ...[2024-12-06 18:09:03.939773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b360e000 len:0x1000 00:10:53.471 [2024-12-06 18:09:03.939821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:53.471 passed 00:10:53.471 Test: blockdev nvme passthru rw ...passed 00:10:53.471 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.471 Test: blockdev nvme admin passthru ...passed 00:10:53.471 Test: blockdev copy ...passed 00:10:53.471 Suite: bdevio tests on: Nvme0n1 00:10:53.471 Test: blockdev write read block ...passed 00:10:53.471 Test: blockdev write zeroes read block ...passed 00:10:53.471 Test: blockdev write zeroes read no split ...passed 00:10:53.471 Test: blockdev write zeroes read split ...passed 00:10:53.471 Test: blockdev write zeroes read split partial ...passed 00:10:53.471 Test: blockdev reset ...[2024-12-06 18:09:04.024828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:53.471 [2024-12-06 18:09:04.028700] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:53.471 passed 00:10:53.471 Test: blockdev write read 8 blocks ...passed 00:10:53.471 Test: blockdev write read size > 128k ...passed 00:10:53.471 Test: blockdev write read invalid size ...passed 00:10:53.471 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.471 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.471 Test: blockdev write read max offset ...passed 00:10:53.471 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.471 Test: blockdev writev readv 8 blocks ...passed 00:10:53.471 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.471 Test: blockdev writev readv block ...passed 00:10:53.471 Test: blockdev writev readv size > 128k ...passed 00:10:53.471 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.471 Test: blockdev comparev and writev ...passed 00:10:53.471 Test: blockdev nvme passthru rw ...[2024-12-06 18:09:04.036482] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:53.471 separate metadata which is not supported yet. 
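comparev_and_writev is skipped on Nvme0n1 because that bdev carries separate (non-interleaved) metadata, which bdevio does not handle yet; bdevio.c logs the skip as an ERROR but the case is still counted as passed. One way to see which bdevs this applies to, assuming bdev_get_bdevs exposes the md_size and md_interleave fields (true for NVMe bdevs in recent SPDK, but worth verifying against your build):

    # Separate metadata shows up as md_size > 0 with md_interleave == false.
    ./scripts/rpc.py bdev_get_bdevs | \
      jq -r '.[] | "\(.name)\tmd_size=\(.md_size)\tinterleave=\(.md_interleave)"'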
00:10:53.471 passed 00:10:53.471 Test: blockdev nvme passthru vendor specific ...[2024-12-06 18:09:04.037139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:53.471 [2024-12-06 18:09:04.037184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:53.471 passed 00:10:53.471 Test: blockdev nvme admin passthru ...passed 00:10:53.742 Test: blockdev copy ...passed 00:10:53.742 00:10:53.742 Run Summary: Type Total Ran Passed Failed Inactive 00:10:53.742 suites 7 7 n/a 0 0 00:10:53.742 tests 161 161 161 0 0 00:10:53.742 asserts 1025 1025 1025 0 n/a 00:10:53.742 00:10:53.742 Elapsed time = 2.021 seconds 00:10:53.742 0 00:10:53.742 18:09:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62614 00:10:53.742 18:09:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62614 ']' 00:10:53.742 18:09:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62614 00:10:53.742 18:09:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:10:53.742 18:09:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.742 18:09:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62614 00:10:53.742 killing process with pid 62614 00:10:53.742 18:09:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.742 18:09:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.742 18:09:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62614' 00:10:53.742 18:09:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62614 00:10:53.742 18:09:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62614 00:10:54.675 ************************************ 00:10:54.675 END TEST bdev_bounds 00:10:54.675 ************************************ 00:10:54.675 18:09:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:54.675 00:10:54.675 real 0m3.094s 00:10:54.675 user 0m7.976s 00:10:54.675 sys 0m0.431s 00:10:54.675 18:09:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.675 18:09:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:54.933 18:09:05 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:54.933 18:09:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:54.933 18:09:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.933 18:09:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:54.933 ************************************ 00:10:54.933 START TEST bdev_nbd 00:10:54.933 ************************************ 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62679 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62679 /var/tmp/spdk-nbd.sock 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62679 ']' 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.933 18:09:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:54.933 [2024-12-06 18:09:05.383565] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
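bdev_nbd exercises the same bdevs through the kernel's NBD driver: a standalone bdev_svc app is launched with its own RPC socket, waitforlisten blocks until that socket answers, and each bdev is then exported as a /dev/nbdX node. The moving parts, reduced to a sketch; the paths and RPC names are taken from the trace, while the sleep is a crude stand-in for waitforlisten's polling:

    SPDK=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-nbd.sock
    # Start a minimal SPDK app that only hosts the bdevs and an RPC server.
    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$sock" -i 0 \
        --json "$SPDK/test/bdev/bdev.json" &
    sleep 2
    # Export one bdev through the NBD kernel driver and list the mappings.
    "$SPDK/scripts/rpc.py" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
    "$SPDK/scripts/rpc.py" -s "$sock" nbd_get_disks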
00:10:54.933 [2024-12-06 18:09:05.383686] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.191 [2024-12-06 18:09:05.567855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.191 [2024-12-06 18:09:05.689305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:56.129 1+0 records in 00:10:56.129 1+0 records out 00:10:56.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483412 s, 8.5 MB/s 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:56.129 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:10:56.705 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:56.705 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:56.705 18:09:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:56.705 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:56.705 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:56.705 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:56.705 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:56.705 18:09:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:56.705 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:56.705 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:56.705 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:56.705 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:56.705 1+0 records in 00:10:56.705 1+0 records out 00:10:56.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000708845 s, 5.8 MB/s 00:10:56.705 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.705 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:56.705 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.705 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:56.705 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:56.705 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:56.705 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:56.705 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:10:56.705 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:56.964 1+0 records in 00:10:56.964 1+0 records out 00:10:56.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556937 s, 7.4 MB/s 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:56.964 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:57.223 1+0 records in 00:10:57.223 1+0 records out 00:10:57.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514173 s, 8.0 MB/s 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:57.223 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:57.482 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:57.482 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:57.482 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:57.482 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:10:57.482 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:57.482 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:57.482 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:57.482 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:10:57.482 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:57.482 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:57.482 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:57.483 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:57.483 1+0 records in 00:10:57.483 1+0 records out 00:10:57.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000981211 s, 4.2 MB/s 00:10:57.483 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.483 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:57.483 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.483 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:57.483 18:09:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:57.483 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:57.483 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:57.483 18:09:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:57.742 1+0 records in 00:10:57.742 1+0 records out 00:10:57.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000794741 s, 5.2 MB/s 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:57.742 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:58.001 1+0 records in 00:10:58.001 1+0 records out 00:10:58.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103342 s, 4.0 MB/s 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:58.001 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:58.001 { 00:10:58.001 "nbd_device": "/dev/nbd0", 00:10:58.001 "bdev_name": "Nvme0n1" 00:10:58.001 }, 00:10:58.001 { 00:10:58.001 "nbd_device": "/dev/nbd1", 00:10:58.001 "bdev_name": "Nvme1n1p1" 00:10:58.001 }, 00:10:58.001 { 00:10:58.001 "nbd_device": "/dev/nbd2", 00:10:58.001 "bdev_name": "Nvme1n1p2" 00:10:58.001 }, 00:10:58.001 { 00:10:58.001 "nbd_device": "/dev/nbd3", 00:10:58.001 "bdev_name": "Nvme2n1" 00:10:58.001 }, 00:10:58.001 { 00:10:58.002 "nbd_device": "/dev/nbd4", 00:10:58.002 "bdev_name": "Nvme2n2" 00:10:58.002 }, 00:10:58.002 { 00:10:58.002 "nbd_device": "/dev/nbd5", 00:10:58.002 "bdev_name": "Nvme2n3" 00:10:58.002 }, 00:10:58.002 { 00:10:58.002 "nbd_device": "/dev/nbd6", 00:10:58.002 "bdev_name": "Nvme3n1" 00:10:58.002 } 00:10:58.002 ]' 00:10:58.002 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:58.002 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:58.002 { 00:10:58.002 "nbd_device": "/dev/nbd0", 00:10:58.002 "bdev_name": "Nvme0n1" 00:10:58.002 }, 00:10:58.002 { 00:10:58.002 "nbd_device": "/dev/nbd1", 00:10:58.002 "bdev_name": "Nvme1n1p1" 00:10:58.002 }, 00:10:58.002 { 00:10:58.002 "nbd_device": "/dev/nbd2", 00:10:58.002 "bdev_name": "Nvme1n1p2" 00:10:58.002 }, 00:10:58.002 { 00:10:58.002 "nbd_device": "/dev/nbd3", 00:10:58.002 "bdev_name": "Nvme2n1" 00:10:58.002 }, 00:10:58.002 { 00:10:58.002 "nbd_device": "/dev/nbd4", 00:10:58.002 "bdev_name": "Nvme2n2" 00:10:58.002 }, 00:10:58.002 { 00:10:58.002 "nbd_device": "/dev/nbd5", 00:10:58.002 "bdev_name": "Nvme2n3" 00:10:58.002 }, 00:10:58.002 { 00:10:58.002 "nbd_device": "/dev/nbd6", 00:10:58.002 "bdev_name": "Nvme3n1" 00:10:58.002 } 00:10:58.002 ]' 00:10:58.002 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:58.268 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:10:58.269 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.269 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:10:58.269 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:58.269 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:58.269 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:58.269 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:58.533 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:58.533 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:58.533 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:58.533 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:58.533 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:58.533 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:58.533 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:58.533 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:58.533 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:58.534 18:09:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:58.534 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:58.534 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:58.534 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:58.534 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:58.534 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:58.534 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:58.534 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:58.534 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:58.534 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:58.534 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:58.793 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:58.793 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:58.793 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:58.793 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:58.793 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:58.793 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:58.793 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:58.793 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:58.793 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:58.793 18:09:09 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:59.051 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:59.051 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:59.051 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:59.051 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:59.051 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.051 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:59.051 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:59.051 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.051 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.051 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:59.310 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:59.310 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:59.310 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:59.310 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:59.310 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.310 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:59.310 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:59.310 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.310 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.310 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:59.569 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:59.569 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:59.569 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:59.569 18:09:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:59.569 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.569 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:59.569 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:59.569 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.569 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.569 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:59.828 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:59.828 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:59.828 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:10:59.828 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:59.828 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.828 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:59.828 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:59.828 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.828 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:59.828 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:59.828 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:00.087 
18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:00.087 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:11:00.347 /dev/nbd0 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:00.347 1+0 records in 00:11:00.347 1+0 records out 00:11:00.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587622 s, 7.0 MB/s 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:00.347 18:09:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:11:00.606 /dev/nbd1 00:11:00.606 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:00.606 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:00.606 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:00.606 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:00.606 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:00.606 18:09:11 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:00.606 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:00.606 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:00.606 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:00.606 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:00.607 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:00.607 1+0 records in 00:11:00.607 1+0 records out 00:11:00.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676435 s, 6.1 MB/s 00:11:00.607 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:00.607 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:00.607 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:00.607 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:00.607 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:00.607 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:00.607 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:00.607 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:11:00.866 /dev/nbd10 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:00.866 1+0 records in 00:11:00.866 1+0 records out 00:11:00.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533419 s, 7.7 MB/s 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:00.866 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:11:01.125 /dev/nbd11 00:11:01.125 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:01.125 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:01.125 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:11:01.125 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:01.125 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:01.125 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:01.126 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:11:01.126 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:01.126 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:01.126 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:01.126 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:01.126 1+0 records in 00:11:01.126 1+0 records out 00:11:01.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627356 s, 6.5 MB/s 00:11:01.126 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:01.126 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:01.126 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:01.126 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:01.126 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:01.126 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:01.126 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:01.126 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:11:01.384 /dev/nbd12 00:11:01.384 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:01.384 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:01.384 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:11:01.384 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:01.384 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:01.384 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:01.384 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
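Every mapping is verified the same way: waitfornbd polls /proc/partitions for up to 20 tries until the kernel publishes the node, then proves the device actually serves I/O with a single 4 KiB O_DIRECT read and checks that a non-empty file came back. The check, reduced to a sketch; the helper name, retry bound, and dd arguments come from the trace, while the sleep between tries is an assumption, since the passing runs above never reach it:

    waitfornbd_sketch() {
      local name=$1 i size
      for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1   # assumed backoff; not visible in the trace
      done
      # One direct read proves the NBD node is backed by the bdev.
      dd if="/dev/$name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [[ $size != 0 ]]
    }
    waitfornbd_sketch nbd12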
00:11:01.384 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:01.384 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:01.384 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:01.385 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:01.385 1+0 records in 00:11:01.385 1+0 records out 00:11:01.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591399 s, 6.9 MB/s 00:11:01.385 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:01.385 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:01.385 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:01.385 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:01.385 18:09:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:01.385 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:01.385 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:01.385 18:09:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:11:01.643 /dev/nbd13 00:11:01.643 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:01.643 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:01.643 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:11:01.643 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:01.643 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:01.643 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:01.643 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:11:01.643 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:01.643 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:01.643 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:01.643 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:01.643 1+0 records in 00:11:01.643 1+0 records out 00:11:01.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000713993 s, 5.7 MB/s 00:11:01.643 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:01.643 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:01.643 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:01.643 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:01.643 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:01.643 18:09:12 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:01.643 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:01.644 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:11:01.902 /dev/nbd14 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:01.902 1+0 records in 00:11:01.902 1+0 records out 00:11:01.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551594 s, 7.4 MB/s 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:01.902 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:02.161 { 00:11:02.161 "nbd_device": "/dev/nbd0", 00:11:02.161 "bdev_name": "Nvme0n1" 00:11:02.161 }, 00:11:02.161 { 00:11:02.161 "nbd_device": "/dev/nbd1", 00:11:02.161 "bdev_name": "Nvme1n1p1" 00:11:02.161 }, 00:11:02.161 { 00:11:02.161 "nbd_device": "/dev/nbd10", 00:11:02.161 "bdev_name": "Nvme1n1p2" 00:11:02.161 }, 00:11:02.161 { 00:11:02.161 "nbd_device": "/dev/nbd11", 00:11:02.161 "bdev_name": "Nvme2n1" 00:11:02.161 }, 00:11:02.161 { 00:11:02.161 "nbd_device": "/dev/nbd12", 00:11:02.161 "bdev_name": "Nvme2n2" 00:11:02.161 }, 00:11:02.161 { 00:11:02.161 "nbd_device": "/dev/nbd13", 00:11:02.161 "bdev_name": "Nvme2n3" 
00:11:02.161 }, 00:11:02.161 { 00:11:02.161 "nbd_device": "/dev/nbd14", 00:11:02.161 "bdev_name": "Nvme3n1" 00:11:02.161 } 00:11:02.161 ]' 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:02.161 { 00:11:02.161 "nbd_device": "/dev/nbd0", 00:11:02.161 "bdev_name": "Nvme0n1" 00:11:02.161 }, 00:11:02.161 { 00:11:02.161 "nbd_device": "/dev/nbd1", 00:11:02.161 "bdev_name": "Nvme1n1p1" 00:11:02.161 }, 00:11:02.161 { 00:11:02.161 "nbd_device": "/dev/nbd10", 00:11:02.161 "bdev_name": "Nvme1n1p2" 00:11:02.161 }, 00:11:02.161 { 00:11:02.161 "nbd_device": "/dev/nbd11", 00:11:02.161 "bdev_name": "Nvme2n1" 00:11:02.161 }, 00:11:02.161 { 00:11:02.161 "nbd_device": "/dev/nbd12", 00:11:02.161 "bdev_name": "Nvme2n2" 00:11:02.161 }, 00:11:02.161 { 00:11:02.161 "nbd_device": "/dev/nbd13", 00:11:02.161 "bdev_name": "Nvme2n3" 00:11:02.161 }, 00:11:02.161 { 00:11:02.161 "nbd_device": "/dev/nbd14", 00:11:02.161 "bdev_name": "Nvme3n1" 00:11:02.161 } 00:11:02.161 ]' 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:02.161 /dev/nbd1 00:11:02.161 /dev/nbd10 00:11:02.161 /dev/nbd11 00:11:02.161 /dev/nbd12 00:11:02.161 /dev/nbd13 00:11:02.161 /dev/nbd14' 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:02.161 /dev/nbd1 00:11:02.161 /dev/nbd10 00:11:02.161 /dev/nbd11 00:11:02.161 /dev/nbd12 00:11:02.161 /dev/nbd13 00:11:02.161 /dev/nbd14' 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:02.161 256+0 records in 00:11:02.161 256+0 records out 00:11:02.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128848 s, 81.4 MB/s 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:02.161 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:02.421 256+0 records in 00:11:02.421 256+0 records out 00:11:02.421 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.137168 s, 7.6 MB/s 00:11:02.421 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:02.421 18:09:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:02.771 256+0 records in 00:11:02.771 256+0 records out 00:11:02.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144642 s, 7.2 MB/s 00:11:02.771 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:02.771 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:02.771 256+0 records in 00:11:02.771 256+0 records out 00:11:02.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150791 s, 7.0 MB/s 00:11:02.771 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:02.771 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:02.771 256+0 records in 00:11:02.771 256+0 records out 00:11:02.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149489 s, 7.0 MB/s 00:11:02.771 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:02.771 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:03.030 256+0 records in 00:11:03.030 256+0 records out 00:11:03.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146746 s, 7.1 MB/s 00:11:03.030 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:03.030 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:03.290 256+0 records in 00:11:03.290 256+0 records out 00:11:03.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14714 s, 7.1 MB/s 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:03.290 256+0 records in 00:11:03.290 256+0 records out 00:11:03.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147637 s, 7.1 MB/s 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:03.290 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:03.549 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:03.549 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:03.549 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:03.549 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:03.549 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:03.549 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:03.549 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:03.550 18:09:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:03.550 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:03.550 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:03.550 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:03.550 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:03.550 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:03.550 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:03.550 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:03.550 18:09:14 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:03.550 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:03.550 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:03.809 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:03.809 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:03.809 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:03.809 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:03.809 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:03.809 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:03.809 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:03.809 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:03.809 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:03.809 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:04.067 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:04.067 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:04.067 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:04.067 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:04.067 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:04.067 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:04.067 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:04.067 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:04.067 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:04.067 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:04.326 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:04.326 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:04.326 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:04.326 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:04.326 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:04.326 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:04.326 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:04.326 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:04.326 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:04.326 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:04.585 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:11:04.585 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:04.585 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:04.585 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:04.585 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:04.585 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:04.585 18:09:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:04.585 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:04.585 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:04.585 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:04.843 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:04.843 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:04.843 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:04.843 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:04.843 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:04.843 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:04.843 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:04.843 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:04.843 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:04.843 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:05.101 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:05.101 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:05.101 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:05.101 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:05.101 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:05.101 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:05.101 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:05.101 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:05.101 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:05.101 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.101 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:05.360 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:05.360 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:05.360 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:05.360 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:11:05.360 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:05.360 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:05.360 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:05.360 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:05.360 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:05.360 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:05.360 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:05.360 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:05.360 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:05.360 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.360 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:11:05.360 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:05.618 malloc_lvol_verify 00:11:05.618 18:09:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:05.878 b4f8f8b5-00cd-4808-9c52-1325cb6d32cd 00:11:05.878 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:05.878 68ceeb4c-06c6-4260-99e7-ee2d9b206b4f 00:11:06.138 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:06.138 /dev/nbd0 00:11:06.138 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:11:06.138 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:11:06.138 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:11:06.138 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:11:06.138 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:11:06.138 mke2fs 1.47.0 (5-Feb-2023) 00:11:06.138 Discarding device blocks: 0/4096 done 00:11:06.138 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:06.138 00:11:06.138 Allocating group tables: 0/1 done 00:11:06.138 Writing inode tables: 0/1 done 00:11:06.397 Creating journal (1024 blocks): done 00:11:06.397 Writing superblocks and filesystem accounting information: 0/1 done 00:11:06.397 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62679 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62679 ']' 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62679 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.397 18:09:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62679 00:11:06.655 18:09:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.655 18:09:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.655 killing process with pid 62679 00:11:06.655 18:09:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62679' 00:11:06.655 18:09:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62679 00:11:06.655 18:09:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62679 00:11:08.028 18:09:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:08.028 00:11:08.028 real 0m12.953s 00:11:08.028 user 0m16.704s 00:11:08.028 sys 0m5.472s 00:11:08.028 18:09:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.028 18:09:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:08.028 ************************************ 00:11:08.028 END TEST bdev_nbd 00:11:08.028 ************************************ 00:11:08.028 18:09:18 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:11:08.028 18:09:18 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:11:08.028 18:09:18 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:11:08.028 skipping fio tests on NVMe due to multi-ns failures. 00:11:08.028 18:09:18 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:11:08.028 18:09:18 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:08.028 18:09:18 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:08.028 18:09:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:08.028 18:09:18 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.028 18:09:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:08.028 ************************************ 00:11:08.028 START TEST bdev_verify 00:11:08.028 ************************************ 00:11:08.028 18:09:18 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:08.028 [2024-12-06 18:09:18.389758] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:11:08.028 [2024-12-06 18:09:18.389879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63107 ] 00:11:08.028 [2024-12-06 18:09:18.569460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:08.286 [2024-12-06 18:09:18.688440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.286 [2024-12-06 18:09:18.688467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.907 Running I/O for 5 seconds... 
00:11:11.216 20800.00 IOPS, 81.25 MiB/s [2024-12-06T18:09:22.729Z] 20832.00 IOPS, 81.38 MiB/s [2024-12-06T18:09:23.666Z] 20757.33 IOPS, 81.08 MiB/s [2024-12-06T18:09:24.604Z] 21232.00 IOPS, 82.94 MiB/s [2024-12-06T18:09:24.604Z] 21030.40 IOPS, 82.15 MiB/s 00:11:14.028 Latency(us) 00:11:14.028 [2024-12-06T18:09:24.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:14.028 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:14.028 Verification LBA range: start 0x0 length 0xbd0bd 00:11:14.028 Nvme0n1 : 5.04 1497.40 5.85 0.00 0.00 85147.04 19897.68 91803.04 00:11:14.028 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:14.028 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:11:14.028 Nvme0n1 : 5.04 1448.03 5.66 0.00 0.00 88053.78 21266.30 94329.73 00:11:14.028 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:14.028 Verification LBA range: start 0x0 length 0x4ff80 00:11:14.028 Nvme1n1p1 : 5.08 1500.08 5.86 0.00 0.00 84765.99 12475.53 82959.63 00:11:14.028 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:14.028 Verification LBA range: start 0x4ff80 length 0x4ff80 00:11:14.028 Nvme1n1p1 : 5.08 1450.22 5.66 0.00 0.00 87720.25 13896.79 88855.24 00:11:14.028 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:14.028 Verification LBA range: start 0x0 length 0x4ff7f 00:11:14.028 Nvme1n1p2 : 5.08 1499.59 5.86 0.00 0.00 84593.65 12633.45 76642.90 00:11:14.028 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:14.028 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:11:14.029 Nvme1n1p2 : 5.09 1457.63 5.69 0.00 0.00 87276.71 15475.97 78748.48 00:11:14.029 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:14.029 Verification LBA range: start 0x0 length 0x80000 00:11:14.029 Nvme2n1 : 5.09 1508.08 5.89 0.00 0.00 84231.17 11001.63 69905.07 00:11:14.029 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:14.029 Verification LBA range: start 0x80000 length 0x80000 00:11:14.029 Nvme2n1 : 5.09 1457.16 5.69 0.00 0.00 87109.89 16212.92 76642.90 00:11:14.029 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:14.029 Verification LBA range: start 0x0 length 0x80000 00:11:14.029 Nvme2n2 : 5.09 1507.73 5.89 0.00 0.00 84104.74 11212.18 65272.80 00:11:14.029 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:14.029 Verification LBA range: start 0x80000 length 0x80000 00:11:14.029 Nvme2n2 : 5.10 1456.69 5.69 0.00 0.00 86972.53 16528.76 74958.44 00:11:14.029 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:14.029 Verification LBA range: start 0x0 length 0x80000 00:11:14.029 Nvme2n3 : 5.10 1507.24 5.89 0.00 0.00 83980.26 11001.63 68220.61 00:11:14.029 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:14.029 Verification LBA range: start 0x80000 length 0x80000 00:11:14.029 Nvme2n3 : 5.10 1456.22 5.69 0.00 0.00 86835.36 16528.76 77906.25 00:11:14.029 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:14.029 Verification LBA range: start 0x0 length 0x20000 00:11:14.029 Nvme3n1 : 5.10 1506.74 5.89 0.00 0.00 83843.58 11054.27 69905.07 00:11:14.029 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:14.029 Verification LBA range: start 0x20000 length 0x20000 00:11:14.029 
Nvme3n1 : 5.10 1455.86 5.69 0.00 0.00 86690.05 15791.81 78748.48 00:11:14.029 [2024-12-06T18:09:24.605Z] =================================================================================================================== 00:11:14.029 [2024-12-06T18:09:24.605Z] Total : 20708.67 80.89 0.00 0.00 85782.91 11001.63 94329.73 00:11:15.929 ************************************ 00:11:15.929 END TEST bdev_verify 00:11:15.929 ************************************ 00:11:15.929 00:11:15.929 real 0m7.696s 00:11:15.929 user 0m14.229s 00:11:15.929 sys 0m0.311s 00:11:15.929 18:09:25 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.929 18:09:25 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:11:15.929 18:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:15.929 18:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:15.929 18:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.929 18:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:15.929 ************************************ 00:11:15.929 START TEST bdev_verify_big_io 00:11:15.929 ************************************ 00:11:15.929 18:09:26 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:15.929 [2024-12-06 18:09:26.141773] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:11:15.929 [2024-12-06 18:09:26.141888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63210 ] 00:11:15.929 [2024-12-06 18:09:26.323038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:15.929 [2024-12-06 18:09:26.432755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.929 [2024-12-06 18:09:26.432785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.866 Running I/O for 5 seconds... 
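[note] Both bdev_verify above and the big-I/O pass starting here drive the same bdevperf binary; only the I/O size differs. An annotated form of the invocation, with flag meanings paraphrased from common bdevperf usage rather than taken from this log (check bdevperf --help on your build):
args=(
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json   # bdev config to load
    -q 128       # queue depth
    -o 65536     # I/O size in bytes: 4096 in bdev_verify, 65536 in this pass
    -w verify    # workload: write, read back, compare
    -t 5         # run time in seconds
    -C           # meaning not shown in this log; consult bdevperf --help
    -m 0x3       # core mask 0x3: reactors on cores 0 and 1, matching the log
)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${args[@]}"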
00:11:21.532 2263.00 IOPS, 141.44 MiB/s [2024-12-06T18:09:33.132Z] 3069.50 IOPS, 191.84 MiB/s [2024-12-06T18:09:33.699Z] 3615.67 IOPS, 225.98 MiB/s 00:11:23.123 Latency(us) 00:11:23.123 [2024-12-06T18:09:33.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:23.123 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:23.123 Verification LBA range: start 0x0 length 0xbd0b 00:11:23.123 Nvme0n1 : 5.66 143.67 8.98 0.00 0.00 855133.94 22634.92 875918.91 00:11:23.123 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:23.123 Verification LBA range: start 0xbd0b length 0xbd0b 00:11:23.123 Nvme0n1 : 5.70 128.53 8.03 0.00 0.00 947785.49 32004.73 1273451.33 00:11:23.123 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:23.123 Verification LBA range: start 0x0 length 0x4ff8 00:11:23.123 Nvme1n1p1 : 5.66 152.62 9.54 0.00 0.00 800299.79 66115.03 869181.07 00:11:23.123 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:23.123 Verification LBA range: start 0x4ff8 length 0x4ff8 00:11:23.123 Nvme1n1p1 : 5.71 134.59 8.41 0.00 0.00 904694.82 88434.12 1111743.23 00:11:23.123 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:23.123 Verification LBA range: start 0x0 length 0x4ff7 00:11:23.123 Nvme1n1p2 : 5.70 143.34 8.96 0.00 0.00 827947.12 65272.80 1300402.69 00:11:23.123 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:23.123 Verification LBA range: start 0x4ff7 length 0x4ff7 00:11:23.123 Nvme1n1p2 : 5.71 134.49 8.41 0.00 0.00 874248.70 112858.78 997199.99 00:11:23.123 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:23.123 Verification LBA range: start 0x0 length 0x8000 00:11:23.123 Nvme2n1 : 5.70 155.38 9.71 0.00 0.00 757974.66 37058.11 1071316.20 00:11:23.123 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:23.123 Verification LBA range: start 0x8000 length 0x8000 00:11:23.123 Nvme2n1 : 5.77 143.57 8.97 0.00 0.00 784888.50 24635.22 869181.07 00:11:23.123 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:23.123 Verification LBA range: start 0x0 length 0x8000 00:11:23.123 Nvme2n2 : 5.71 157.47 9.84 0.00 0.00 736061.70 38532.01 875918.91 00:11:23.123 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:23.123 Verification LBA range: start 0x8000 length 0x8000 00:11:23.123 Nvme2n2 : 5.85 153.28 9.58 0.00 0.00 711732.94 28004.14 882656.75 00:11:23.123 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:23.123 Verification LBA range: start 0x0 length 0x8000 00:11:23.123 Nvme2n3 : 5.71 162.14 10.13 0.00 0.00 705238.12 3790.03 882656.75 00:11:23.123 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:23.123 Verification LBA range: start 0x8000 length 0x8000 00:11:23.123 Nvme2n3 : 5.97 193.04 12.06 0.00 0.00 547478.52 12791.36 815278.37 00:11:23.123 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:23.123 Verification LBA range: start 0x0 length 0x2000 00:11:23.123 Nvme3n1 : 5.72 167.95 10.50 0.00 0.00 669548.57 5053.38 727686.48 00:11:23.123 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:23.123 Verification LBA range: start 0x2000 length 0x2000 00:11:23.123 Nvme3n1 : 6.14 288.17 18.01 0.00 0.00 358118.59 1046.21 832122.96 00:11:23.123 
[2024-12-06T18:09:33.699Z] =================================================================================================================== 00:11:23.123 [2024-12-06T18:09:33.700Z] Total : 2258.24 141.14 0.00 0.00 711446.78 1046.21 1300402.69 00:11:25.038 00:11:25.038 real 0m9.392s 00:11:25.038 user 0m17.604s 00:11:25.038 sys 0m0.327s 00:11:25.038 18:09:35 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.038 18:09:35 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.038 ************************************ 00:11:25.038 END TEST bdev_verify_big_io 00:11:25.038 ************************************ 00:11:25.038 18:09:35 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:25.038 18:09:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:25.038 18:09:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.038 18:09:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:25.038 ************************************ 00:11:25.038 START TEST bdev_write_zeroes 00:11:25.038 ************************************ 00:11:25.038 18:09:35 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:25.296 [2024-12-06 18:09:35.618340] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:11:25.296 [2024-12-06 18:09:35.618455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63330 ] 00:11:25.296 [2024-12-06 18:09:35.793462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.554 [2024-12-06 18:09:35.905671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.121 Running I/O for 1 seconds... 
00:11:27.314 63168.00 IOPS, 246.75 MiB/s 00:11:27.314 Latency(us) 00:11:27.314 [2024-12-06T18:09:37.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:27.314 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:27.314 Nvme0n1 : 1.03 8978.21 35.07 0.00 0.00 14229.82 12107.05 34531.42 00:11:27.314 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:27.314 Nvme1n1p1 : 1.03 8969.37 35.04 0.00 0.00 14223.78 12475.53 35373.65 00:11:27.314 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:27.314 Nvme1n1p2 : 1.03 8960.64 35.00 0.00 0.00 14146.10 12001.77 31373.06 00:11:27.314 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:27.314 Nvme2n1 : 1.03 8952.59 34.97 0.00 0.00 14111.97 12370.25 29478.04 00:11:27.314 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:27.314 Nvme2n2 : 1.03 8944.52 34.94 0.00 0.00 14085.08 11843.86 28004.14 00:11:27.314 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:27.314 Nvme2n3 : 1.03 8936.55 34.91 0.00 0.00 14051.65 10475.23 26003.84 00:11:27.314 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:27.314 Nvme3n1 : 1.03 8928.60 34.88 0.00 0.00 14039.10 9159.25 25161.61 00:11:27.314 [2024-12-06T18:09:37.890Z] =================================================================================================================== 00:11:27.314 [2024-12-06T18:09:37.890Z] Total : 62670.48 244.81 0.00 0.00 14126.79 9159.25 35373.65 00:11:28.249 00:11:28.249 real 0m3.297s 00:11:28.249 user 0m2.914s 00:11:28.249 sys 0m0.266s 00:11:28.249 18:09:38 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.507 ************************************ 00:11:28.507 END TEST bdev_write_zeroes 00:11:28.507 ************************************ 00:11:28.507 18:09:38 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:28.507 18:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:28.507 18:09:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:28.507 18:09:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.507 18:09:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:28.507 ************************************ 00:11:28.507 START TEST bdev_json_nonenclosed 00:11:28.507 ************************************ 00:11:28.507 18:09:38 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:28.507 [2024-12-06 18:09:38.986665] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
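[note] A quick sanity check on the write_zeroes table above: the MiB/s column is just IOPS times the 4 KiB I/O size. For the headline row:
awk 'BEGIN { printf "%.2f MiB/s\n", 63168 * 4096 / (1024 * 1024) }'   # -> 246.75 MiB/s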
00:11:28.507 [2024-12-06 18:09:38.986918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63383 ] 00:11:28.766 [2024-12-06 18:09:39.168499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.766 [2024-12-06 18:09:39.279298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.766 [2024-12-06 18:09:39.279587] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:28.766 [2024-12-06 18:09:39.279621] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:28.766 [2024-12-06 18:09:39.279635] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:29.025 00:11:29.025 real 0m0.643s 00:11:29.025 user 0m0.388s 00:11:29.025 sys 0m0.150s 00:11:29.025 18:09:39 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.025 ************************************ 00:11:29.025 END TEST bdev_json_nonenclosed 00:11:29.025 ************************************ 00:11:29.025 18:09:39 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:29.025 18:09:39 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:29.025 18:09:39 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:29.381 18:09:39 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.381 18:09:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:29.381 ************************************ 00:11:29.381 START TEST bdev_json_nonarray 00:11:29.381 ************************************ 00:11:29.381 18:09:39 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:29.381 [2024-12-06 18:09:39.706827] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:11:29.381 [2024-12-06 18:09:39.706958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63414 ] 00:11:29.381 [2024-12-06 18:09:39.888531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.640 [2024-12-06 18:09:40.004687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.640 [2024-12-06 18:09:40.004796] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
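[note] The nonenclosed and nonarray tests above feed the config loader deliberately malformed documents. Hypothetical fixtures (not the repo's actual nonenclosed.json/nonarray.json contents) that would reproduce the two errors seen in the trace:
echo '"just a string"'        > nonenclosed.json    # loader: not enclosed in {}
echo '{ "subsystems": {} }'   > nonarray.json       # loader: 'subsystems' should be an array
echo '{ "subsystems": [] }'   > minimal.json        # presumably the smallest accepted shape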
00:11:29.640 [2024-12-06 18:09:40.004821] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:29.640 [2024-12-06 18:09:40.004835] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:29.898 00:11:29.898 real 0m0.646s 00:11:29.898 user 0m0.403s 00:11:29.898 sys 0m0.138s 00:11:29.898 18:09:40 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.898 18:09:40 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:29.898 ************************************ 00:11:29.898 END TEST bdev_json_nonarray 00:11:29.898 ************************************ 00:11:29.898 18:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:11:29.898 18:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:11:29.899 18:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:11:29.899 18:09:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:29.899 18:09:40 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.899 18:09:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:29.899 ************************************ 00:11:29.899 START TEST bdev_gpt_uuid 00:11:29.899 ************************************ 00:11:29.899 18:09:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:11:29.899 18:09:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:11:29.899 18:09:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:11:29.899 18:09:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63439 00:11:29.899 18:09:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:29.899 18:09:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:29.899 18:09:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63439 00:11:29.899 18:09:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63439 ']' 00:11:29.899 18:09:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.899 18:09:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.899 18:09:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.899 18:09:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.899 18:09:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:29.899 [2024-12-06 18:09:40.444873] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
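[note] waitforlisten above blocks until spdk_tgt answers on /var/tmp/spdk.sock. A minimal stand-in is sketched below; the real helper likely confirms readiness over RPC rather than just testing for the socket node, and the retry budget here is assumed.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" || return 1     # target died during startup
        [ -S "$sock" ] && return 0     # socket node exists; assume it is serving
        sleep 0.1
    done
    return 1
}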
00:11:29.899 [2024-12-06 18:09:40.445377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63439 ] 00:11:30.158 [2024-12-06 18:09:40.615387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.158 [2024-12-06 18:09:40.731542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.095 18:09:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.095 18:09:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:11:31.095 18:09:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:31.095 18:09:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.095 18:09:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:31.663 Some configs were skipped because the RPC state that can call them passed over. 00:11:31.664 18:09:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.664 18:09:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:11:31.664 18:09:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.664 18:09:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:31.664 18:09:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.664 18:09:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:11:31.664 18:09:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.664 18:09:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:31.664 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.664 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:11:31.664 { 00:11:31.664 "name": "Nvme1n1p1", 00:11:31.664 "aliases": [ 00:11:31.664 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:11:31.664 ], 00:11:31.664 "product_name": "GPT Disk", 00:11:31.664 "block_size": 4096, 00:11:31.664 "num_blocks": 655104, 00:11:31.664 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:31.664 "assigned_rate_limits": { 00:11:31.664 "rw_ios_per_sec": 0, 00:11:31.664 "rw_mbytes_per_sec": 0, 00:11:31.664 "r_mbytes_per_sec": 0, 00:11:31.664 "w_mbytes_per_sec": 0 00:11:31.664 }, 00:11:31.664 "claimed": false, 00:11:31.664 "zoned": false, 00:11:31.664 "supported_io_types": { 00:11:31.664 "read": true, 00:11:31.664 "write": true, 00:11:31.664 "unmap": true, 00:11:31.664 "flush": true, 00:11:31.664 "reset": true, 00:11:31.664 "nvme_admin": false, 00:11:31.664 "nvme_io": false, 00:11:31.664 "nvme_io_md": false, 00:11:31.664 "write_zeroes": true, 00:11:31.664 "zcopy": false, 00:11:31.664 "get_zone_info": false, 00:11:31.664 "zone_management": false, 00:11:31.664 "zone_append": false, 00:11:31.664 "compare": true, 00:11:31.664 "compare_and_write": false, 00:11:31.664 "abort": true, 00:11:31.664 "seek_hole": false, 00:11:31.664 "seek_data": false, 00:11:31.664 "copy": true, 00:11:31.664 "nvme_iov_md": false 00:11:31.664 }, 00:11:31.664 "driver_specific": { 
00:11:31.664 "gpt": { 00:11:31.664 "base_bdev": "Nvme1n1", 00:11:31.664 "offset_blocks": 256, 00:11:31.664 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:11:31.664 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:31.664 "partition_name": "SPDK_TEST_first" 00:11:31.664 } 00:11:31.664 } 00:11:31.664 } 00:11:31.664 ]' 00:11:31.664 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:11:31.664 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:11:31.664 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:11:31.664 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:31.664 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:31.664 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:31.664 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:31.664 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.664 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:31.664 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.664 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:11:31.664 { 00:11:31.664 "name": "Nvme1n1p2", 00:11:31.664 "aliases": [ 00:11:31.664 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:11:31.664 ], 00:11:31.664 "product_name": "GPT Disk", 00:11:31.664 "block_size": 4096, 00:11:31.664 "num_blocks": 655103, 00:11:31.664 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:31.664 "assigned_rate_limits": { 00:11:31.664 "rw_ios_per_sec": 0, 00:11:31.664 "rw_mbytes_per_sec": 0, 00:11:31.664 "r_mbytes_per_sec": 0, 00:11:31.664 "w_mbytes_per_sec": 0 00:11:31.664 }, 00:11:31.664 "claimed": false, 00:11:31.664 "zoned": false, 00:11:31.664 "supported_io_types": { 00:11:31.664 "read": true, 00:11:31.664 "write": true, 00:11:31.664 "unmap": true, 00:11:31.664 "flush": true, 00:11:31.664 "reset": true, 00:11:31.664 "nvme_admin": false, 00:11:31.664 "nvme_io": false, 00:11:31.664 "nvme_io_md": false, 00:11:31.664 "write_zeroes": true, 00:11:31.664 "zcopy": false, 00:11:31.664 "get_zone_info": false, 00:11:31.664 "zone_management": false, 00:11:31.664 "zone_append": false, 00:11:31.664 "compare": true, 00:11:31.664 "compare_and_write": false, 00:11:31.664 "abort": true, 00:11:31.664 "seek_hole": false, 00:11:31.664 "seek_data": false, 00:11:31.664 "copy": true, 00:11:31.664 "nvme_iov_md": false 00:11:31.664 }, 00:11:31.664 "driver_specific": { 00:11:31.664 "gpt": { 00:11:31.664 "base_bdev": "Nvme1n1", 00:11:31.664 "offset_blocks": 655360, 00:11:31.664 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:11:31.664 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:31.664 "partition_name": "SPDK_TEST_second" 00:11:31.664 } 00:11:31.664 } 00:11:31.664 } 00:11:31.664 ]' 00:11:31.664 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:11:31.664 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:11:31.664 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:11:31.923 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:31.923 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:31.923 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:31.923 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63439 00:11:31.923 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63439 ']' 00:11:31.923 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63439 00:11:31.923 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:11:31.923 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.923 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63439 00:11:31.923 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.923 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.923 killing process with pid 63439 00:11:31.923 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63439' 00:11:31.923 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63439 00:11:31.923 18:09:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63439 00:11:34.459 00:11:34.459 real 0m4.395s 00:11:34.459 user 0m4.534s 00:11:34.459 sys 0m0.503s 00:11:34.459 18:09:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.459 18:09:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:34.459 ************************************ 00:11:34.459 END TEST bdev_gpt_uuid 00:11:34.459 ************************************ 00:11:34.459 18:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:11:34.459 18:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:11:34.459 18:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:11:34.459 18:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:34.459 18:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:34.459 18:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:11:34.459 18:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:11:34.459 18:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:11:34.459 18:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:35.054 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:35.054 Waiting for block devices as requested 00:11:35.312 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:35.312 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:11:35.312 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:35.570 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.930 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:40.930 18:09:51 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:11:40.930 18:09:51 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:11:40.930 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:40.930 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:40.930 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:40.930 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:40.930 18:09:51 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:11:40.930 00:11:40.930 real 1m6.327s 00:11:40.930 user 1m22.908s 00:11:40.930 sys 0m12.332s 00:11:40.930 18:09:51 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.930 18:09:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:40.930 ************************************ 00:11:40.930 END TEST blockdev_nvme_gpt 00:11:40.930 ************************************ 00:11:40.930 18:09:51 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:40.930 18:09:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:40.930 18:09:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.930 18:09:51 -- common/autotest_common.sh@10 -- # set +x 00:11:40.930 ************************************ 00:11:40.930 START TEST nvme 00:11:40.930 ************************************ 00:11:40.930 18:09:51 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:41.189 * Looking for test storage... 00:11:41.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:41.189 18:09:51 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:41.189 18:09:51 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:41.189 18:09:51 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:41.189 18:09:51 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:41.189 18:09:51 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.189 18:09:51 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.189 18:09:51 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.189 18:09:51 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.189 18:09:51 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.189 18:09:51 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.189 18:09:51 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.189 18:09:51 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.189 18:09:51 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.189 18:09:51 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.190 18:09:51 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:41.190 18:09:51 nvme -- scripts/common.sh@344 -- # case "$op" in 00:11:41.190 18:09:51 nvme -- scripts/common.sh@345 -- # : 1 00:11:41.190 18:09:51 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.190 18:09:51 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:41.190 18:09:51 nvme -- scripts/common.sh@365 -- # decimal 1 00:11:41.190 18:09:51 nvme -- scripts/common.sh@353 -- # local d=1 00:11:41.190 18:09:51 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.190 18:09:51 nvme -- scripts/common.sh@355 -- # echo 1 00:11:41.190 18:09:51 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.190 18:09:51 nvme -- scripts/common.sh@366 -- # decimal 2 00:11:41.190 18:09:51 nvme -- scripts/common.sh@353 -- # local d=2 00:11:41.190 18:09:51 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.190 18:09:51 nvme -- scripts/common.sh@355 -- # echo 2 00:11:41.190 18:09:51 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.190 18:09:51 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.190 18:09:51 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.190 18:09:51 nvme -- scripts/common.sh@368 -- # return 0 00:11:41.190 18:09:51 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.190 18:09:51 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:41.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.190 --rc genhtml_branch_coverage=1 00:11:41.190 --rc genhtml_function_coverage=1 00:11:41.190 --rc genhtml_legend=1 00:11:41.190 --rc geninfo_all_blocks=1 00:11:41.190 --rc geninfo_unexecuted_blocks=1 00:11:41.190 00:11:41.190 ' 00:11:41.190 18:09:51 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:41.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.190 --rc genhtml_branch_coverage=1 00:11:41.190 --rc genhtml_function_coverage=1 00:11:41.190 --rc genhtml_legend=1 00:11:41.190 --rc geninfo_all_blocks=1 00:11:41.190 --rc geninfo_unexecuted_blocks=1 00:11:41.190 00:11:41.190 ' 00:11:41.190 18:09:51 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:41.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.190 --rc genhtml_branch_coverage=1 00:11:41.190 --rc genhtml_function_coverage=1 00:11:41.190 --rc genhtml_legend=1 00:11:41.190 --rc geninfo_all_blocks=1 00:11:41.190 --rc geninfo_unexecuted_blocks=1 00:11:41.190 00:11:41.190 ' 00:11:41.190 18:09:51 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:41.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.190 --rc genhtml_branch_coverage=1 00:11:41.190 --rc genhtml_function_coverage=1 00:11:41.190 --rc genhtml_legend=1 00:11:41.190 --rc geninfo_all_blocks=1 00:11:41.190 --rc geninfo_unexecuted_blocks=1 00:11:41.190 00:11:41.190 ' 00:11:41.190 18:09:51 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:42.127 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:42.695 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:42.695 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:42.695 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:42.695 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:42.954 18:09:53 nvme -- nvme/nvme.sh@79 -- # uname 00:11:42.954 18:09:53 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:11:42.954 18:09:53 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:11:42.954 18:09:53 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:11:42.954 18:09:53 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:11:42.954 18:09:53 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:11:42.954 18:09:53 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:11:42.954 18:09:53 nvme -- common/autotest_common.sh@1075 -- # stubpid=64103 00:11:42.954 Waiting for stub to ready for secondary processes... 00:11:42.954 18:09:53 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:11:42.954 18:09:53 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:42.954 18:09:53 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64103 ]] 00:11:42.954 18:09:53 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:11:42.954 18:09:53 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:11:42.954 [2024-12-06 18:09:53.409718] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:11:42.954 [2024-12-06 18:09:53.409847] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:11:43.890 18:09:54 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:43.890 18:09:54 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64103 ]] 00:11:43.890 18:09:54 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:11:43.890 [2024-12-06 18:09:54.425089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:44.150 [2024-12-06 18:09:54.534748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.150 [2024-12-06 18:09:54.534885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.150 [2024-12-06 18:09:54.534918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.150 [2024-12-06 18:09:54.551960] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:11:44.150 [2024-12-06 18:09:54.551998] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:44.150 [2024-12-06 18:09:54.567568] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:11:44.150 [2024-12-06 18:09:54.567699] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:44.150 [2024-12-06 18:09:54.570905] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:44.150 [2024-12-06 18:09:54.571117] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:11:44.150 [2024-12-06 18:09:54.571180] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:11:44.150 [2024-12-06 18:09:54.574042] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:44.150 [2024-12-06 18:09:54.574240] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:11:44.150 [2024-12-06 18:09:54.574337] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:11:44.150 [2024-12-06 18:09:54.577357] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:44.150 [2024-12-06 18:09:54.578120] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:11:44.150 [2024-12-06 18:09:54.578217] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:11:44.150 [2024-12-06 18:09:54.578298] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:11:44.150 [2024-12-06 18:09:54.578347] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:11:45.083 18:09:55 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:45.083 done. 00:11:45.083 18:09:55 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:11:45.083 18:09:55 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:45.083 18:09:55 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:11:45.083 18:09:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.083 18:09:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:45.083 ************************************ 00:11:45.083 START TEST nvme_reset 00:11:45.083 ************************************ 00:11:45.084 18:09:55 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:45.084 Initializing NVMe Controllers 00:11:45.084 Skipping QEMU NVMe SSD at 0000:00:10.0 00:11:45.084 Skipping QEMU NVMe SSD at 0000:00:11.0 00:11:45.084 Skipping QEMU NVMe SSD at 0000:00:13.0 00:11:45.084 Skipping QEMU NVMe SSD at 0000:00:12.0 00:11:45.084 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:45.084 00:11:45.084 real 0m0.276s 00:11:45.084 user 0m0.090s 00:11:45.084 sys 0m0.142s 00:11:45.084 18:09:55 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.084 18:09:55 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:11:45.084 ************************************ 00:11:45.084 END TEST nvme_reset 00:11:45.084 ************************************ 00:11:45.341 18:09:55 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:45.341 18:09:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:45.341 18:09:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.341 18:09:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:45.341 ************************************ 00:11:45.341 START TEST nvme_identify 00:11:45.341 ************************************ 00:11:45.341 18:09:55 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:11:45.341 18:09:55 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:11:45.341 18:09:55 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:45.341 18:09:55 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:45.341 18:09:55 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:45.341 18:09:55 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:45.341 18:09:55 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:11:45.341 18:09:55 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:45.341 18:09:55 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:45.341 18:09:55 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:45.341 18:09:55 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:45.341 18:09:55 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:45.341 18:09:55 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:45.603 [2024-12-06 18:09:56.087425] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64136 terminated unexpected 00:11:45.603 ===================================================== 00:11:45.603 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:45.603 ===================================================== 00:11:45.603 Controller Capabilities/Features 00:11:45.603 ================================ 00:11:45.603 Vendor ID: 1b36 00:11:45.603 Subsystem Vendor ID: 1af4 00:11:45.603 Serial Number: 12340 00:11:45.603 Model Number: QEMU NVMe Ctrl 00:11:45.603 Firmware Version: 8.0.0 00:11:45.603 Recommended Arb Burst: 6 00:11:45.603 IEEE OUI Identifier: 00 54 52 00:11:45.603 Multi-path I/O 00:11:45.603 May have multiple subsystem ports: No 00:11:45.603 May have multiple controllers: No 00:11:45.603 Associated with SR-IOV VF: No 00:11:45.603 Max Data Transfer Size: 524288 00:11:45.603 Max Number of Namespaces: 256 00:11:45.603 Max Number of I/O Queues: 64 00:11:45.603 NVMe Specification Version (VS): 1.4 00:11:45.603 NVMe Specification Version (Identify): 1.4 00:11:45.603 Maximum Queue Entries: 2048 00:11:45.603 Contiguous Queues Required: Yes 00:11:45.603 Arbitration Mechanisms Supported 00:11:45.603 Weighted Round Robin: Not Supported 00:11:45.603 Vendor Specific: Not Supported 00:11:45.603 Reset Timeout: 7500 ms 00:11:45.603 Doorbell Stride: 4 bytes 00:11:45.603 NVM Subsystem Reset: Not Supported 00:11:45.603 Command Sets Supported 00:11:45.603 NVM Command Set: Supported 00:11:45.603 Boot Partition: Not Supported 00:11:45.603 Memory Page Size Minimum: 4096 bytes 00:11:45.603 Memory Page Size Maximum: 65536 bytes 00:11:45.603 Persistent Memory Region: Not Supported 00:11:45.603 Optional Asynchronous Events Supported 00:11:45.603 Namespace Attribute Notices: Supported 00:11:45.603 Firmware Activation Notices: Not Supported 00:11:45.603 ANA Change Notices: Not Supported 00:11:45.603 PLE Aggregate Log Change Notices: Not Supported 00:11:45.603 LBA Status Info Alert Notices: Not Supported 00:11:45.603 EGE Aggregate Log Change Notices: Not Supported 00:11:45.603 Normal NVM Subsystem Shutdown event: Not Supported 00:11:45.603 Zone Descriptor Change Notices: Not Supported 00:11:45.603 Discovery Log Change Notices: Not Supported 00:11:45.603 Controller Attributes 00:11:45.603 128-bit Host Identifier: Not Supported 00:11:45.603 Non-Operational Permissive Mode: Not Supported 00:11:45.603 NVM Sets: Not Supported 00:11:45.603 Read Recovery Levels: Not Supported 00:11:45.603 Endurance Groups: Not Supported 00:11:45.603 Predictable Latency Mode: Not Supported 00:11:45.603 Traffic Based Keep ALive: Not Supported 00:11:45.603 Namespace Granularity: Not Supported 00:11:45.603 SQ Associations: Not Supported 00:11:45.603 UUID List: Not Supported 00:11:45.603 Multi-Domain Subsystem: Not Supported 00:11:45.603 Fixed Capacity Management: Not Supported 00:11:45.603 Variable Capacity Management: Not Supported 00:11:45.603 Delete Endurance Group: Not Supported 00:11:45.603 Delete NVM Set: Not Supported 00:11:45.603 Extended LBA Formats Supported: Supported 00:11:45.603 Flexible Data Placement Supported: Not Supported 00:11:45.603 00:11:45.603 Controller Memory Buffer Support 00:11:45.603 ================================ 00:11:45.603 Supported: No 
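
The nvme_identify helper traced above builds its target list before dumping anything: scripts/gen_nvme.sh emits a bdev attach config for every NVMe controller bound to a userspace driver, and jq pulls out each controller's PCIe address (traddr). A minimal standalone sketch of that discovery flow, assuming only the repo layout used in this run (the empty-list error text is illustrative, not the suite's exact wording):

    rootdir=/home/vagrant/spdk_repo/spdk

    get_nvme_bdfs() {
        # gen_nvme.sh prints a JSON bdev config; keep only the PCI addresses.
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        # Fail early when no controller is bound to uio_pci_generic/vfio-pci.
        ((${#bdfs[@]} == 0)) && { echo "no NVMe controllers found" >&2; return 1; }
        printf '%s\n' "${bdfs[@]}"
    }

    get_nvme_bdfs    # in this run: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
    "$rootdir/build/bin/spdk_nvme_identify" -i 0    # then dump every controller, as below
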
00:11:45.603 00:11:45.603 Persistent Memory Region Support 00:11:45.603 ================================ 00:11:45.603 Supported: No 00:11:45.603 00:11:45.603 Admin Command Set Attributes 00:11:45.603 ============================ 00:11:45.603 Security Send/Receive: Not Supported 00:11:45.603 Format NVM: Supported 00:11:45.603 Firmware Activate/Download: Not Supported 00:11:45.603 Namespace Management: Supported 00:11:45.603 Device Self-Test: Not Supported 00:11:45.603 Directives: Supported 00:11:45.603 NVMe-MI: Not Supported 00:11:45.603 Virtualization Management: Not Supported 00:11:45.603 Doorbell Buffer Config: Supported 00:11:45.603 Get LBA Status Capability: Not Supported 00:11:45.603 Command & Feature Lockdown Capability: Not Supported 00:11:45.603 Abort Command Limit: 4 00:11:45.603 Async Event Request Limit: 4 00:11:45.603 Number of Firmware Slots: N/A 00:11:45.603 Firmware Slot 1 Read-Only: N/A 00:11:45.603 Firmware Activation Without Reset: N/A 00:11:45.603 Multiple Update Detection Support: N/A 00:11:45.603 Firmware Update Granularity: No Information Provided 00:11:45.603 Per-Namespace SMART Log: Yes 00:11:45.603 Asymmetric Namespace Access Log Page: Not Supported 00:11:45.603 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:45.603 Command Effects Log Page: Supported 00:11:45.603 Get Log Page Extended Data: Supported 00:11:45.603 Telemetry Log Pages: Not Supported 00:11:45.603 Persistent Event Log Pages: Not Supported 00:11:45.603 Supported Log Pages Log Page: May Support 00:11:45.603 Commands Supported & Effects Log Page: Not Supported 00:11:45.603 Feature Identifiers & Effects Log Page:May Support 00:11:45.603 NVMe-MI Commands & Effects Log Page: May Support 00:11:45.603 Data Area 4 for Telemetry Log: Not Supported 00:11:45.603 Error Log Page Entries Supported: 1 00:11:45.603 Keep Alive: Not Supported 00:11:45.603 00:11:45.603 NVM Command Set Attributes 00:11:45.603 ========================== 00:11:45.603 Submission Queue Entry Size 00:11:45.603 Max: 64 00:11:45.603 Min: 64 00:11:45.603 Completion Queue Entry Size 00:11:45.603 Max: 16 00:11:45.603 Min: 16 00:11:45.603 Number of Namespaces: 256 00:11:45.603 Compare Command: Supported 00:11:45.603 Write Uncorrectable Command: Not Supported 00:11:45.603 Dataset Management Command: Supported 00:11:45.603 Write Zeroes Command: Supported 00:11:45.603 Set Features Save Field: Supported 00:11:45.603 Reservations: Not Supported 00:11:45.603 Timestamp: Supported 00:11:45.603 Copy: Supported 00:11:45.603 Volatile Write Cache: Present 00:11:45.603 Atomic Write Unit (Normal): 1 00:11:45.603 Atomic Write Unit (PFail): 1 00:11:45.603 Atomic Compare & Write Unit: 1 00:11:45.603 Fused Compare & Write: Not Supported 00:11:45.603 Scatter-Gather List 00:11:45.603 SGL Command Set: Supported 00:11:45.603 SGL Keyed: Not Supported 00:11:45.603 SGL Bit Bucket Descriptor: Not Supported 00:11:45.603 SGL Metadata Pointer: Not Supported 00:11:45.603 Oversized SGL: Not Supported 00:11:45.603 SGL Metadata Address: Not Supported 00:11:45.603 SGL Offset: Not Supported 00:11:45.603 Transport SGL Data Block: Not Supported 00:11:45.603 Replay Protected Memory Block: Not Supported 00:11:45.603 00:11:45.603 Firmware Slot Information 00:11:45.603 ========================= 00:11:45.603 Active slot: 1 00:11:45.603 Slot 1 Firmware Revision: 1.0 00:11:45.603 00:11:45.603 00:11:45.603 Commands Supported and Effects 00:11:45.603 ============================== 00:11:45.603 Admin Commands 00:11:45.603 -------------- 00:11:45.603 Delete I/O Submission Queue (00h): Supported 
00:11:45.603 Create I/O Submission Queue (01h): Supported 00:11:45.603 Get Log Page (02h): Supported 00:11:45.603 Delete I/O Completion Queue (04h): Supported 00:11:45.603 Create I/O Completion Queue (05h): Supported 00:11:45.603 Identify (06h): Supported 00:11:45.603 Abort (08h): Supported 00:11:45.603 Set Features (09h): Supported 00:11:45.603 Get Features (0Ah): Supported 00:11:45.603 Asynchronous Event Request (0Ch): Supported 00:11:45.603 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:45.603 Directive Send (19h): Supported 00:11:45.603 Directive Receive (1Ah): Supported 00:11:45.603 Virtualization Management (1Ch): Supported 00:11:45.603 Doorbell Buffer Config (7Ch): Supported 00:11:45.603 Format NVM (80h): Supported LBA-Change 00:11:45.603 I/O Commands 00:11:45.603 ------------ 00:11:45.603 Flush (00h): Supported LBA-Change 00:11:45.603 Write (01h): Supported LBA-Change 00:11:45.603 Read (02h): Supported 00:11:45.603 Compare (05h): Supported 00:11:45.603 Write Zeroes (08h): Supported LBA-Change 00:11:45.603 Dataset Management (09h): Supported LBA-Change 00:11:45.603 Unknown (0Ch): Supported 00:11:45.603 Unknown (12h): Supported 00:11:45.603 Copy (19h): Supported LBA-Change 00:11:45.603 Unknown (1Dh): Supported LBA-Change 00:11:45.603 00:11:45.603 Error Log 00:11:45.603 ========= 00:11:45.603 00:11:45.603 Arbitration 00:11:45.603 =========== 00:11:45.603 Arbitration Burst: no limit 00:11:45.603 00:11:45.603 Power Management 00:11:45.603 ================ 00:11:45.603 Number of Power States: 1 00:11:45.603 Current Power State: Power State #0 00:11:45.603 Power State #0: 00:11:45.603 Max Power: 25.00 W 00:11:45.603 Non-Operational State: Operational 00:11:45.603 Entry Latency: 16 microseconds 00:11:45.603 Exit Latency: 4 microseconds 00:11:45.603 Relative Read Throughput: 0 00:11:45.603 Relative Read Latency: 0 00:11:45.603 Relative Write Throughput: 0 00:11:45.603 Relative Write Latency: 0 00:11:45.603 [2024-12-06 18:09:56.088739] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64136 terminated unexpected 00:11:45.603 Idle Power: Not Reported 00:11:45.603 Active Power: Not Reported 00:11:45.603 Non-Operational Permissive Mode: Not Supported 00:11:45.603 00:11:45.603 Health Information 00:11:45.603 ================== 00:11:45.603 Critical Warnings: 00:11:45.603 Available Spare Space: OK 00:11:45.603 Temperature: OK 00:11:45.603 Device Reliability: OK 00:11:45.604 Read Only: No 00:11:45.604 Volatile Memory Backup: OK 00:11:45.604 Current Temperature: 323 Kelvin (50 Celsius) 00:11:45.604 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:45.604 Available Spare: 0% 00:11:45.604 Available Spare Threshold: 0% 00:11:45.604 Life Percentage Used: 0% 00:11:45.604 Data Units Read: 794 00:11:45.604 Data Units Written: 722 00:11:45.604 Host Read Commands: 37180 00:11:45.604 Host Write Commands: 36966 00:11:45.604 Controller Busy Time: 0 minutes 00:11:45.604 Power Cycles: 0 00:11:45.604 Power On Hours: 0 hours 00:11:45.604 Unsafe Shutdowns: 0 00:11:45.604 Unrecoverable Media Errors: 0 00:11:45.604 Lifetime Error Log Entries: 0 00:11:45.604 Warning Temperature Time: 0 minutes 00:11:45.604 Critical Temperature Time: 0 minutes 00:11:45.604 00:11:45.604 Number of Queues 00:11:45.604 ================ 00:11:45.604 Number of I/O Submission Queues: 64 00:11:45.604 Number of I/O Completion Queues: 64 00:11:45.604 00:11:45.604 ZNS Specific Controller Data 00:11:45.604 ============================ 00:11:45.604 Zone Append Size Limit: 0 00:11:45.604
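
spdk_nvme_identify reports in plain text rather than JSON, so quick checks against a dump like the one above are usually done with ordinary text tools. A hedged sketch that scrapes the health fields shown above out of each controller section (the field labels are copied exactly from the output; the awk filter itself is illustrative, not part of the test suite). Note the tool prints temperatures in both scales, deriving Celsius as Kelvin - 273: the 323 K reading is 50 C and the 343 K threshold is 70 C.

    # One line per controller serial, current temperature, and spare capacity.
    "$rootdir/build/bin/spdk_nvme_identify" -i 0 |
        awk '/Serial Number:|Current Temperature:|Available Spare:/ { print }'
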
00:11:45.604 00:11:45.604 Active Namespaces 00:11:45.604 ================= 00:11:45.604 Namespace ID:1 00:11:45.604 Error Recovery Timeout: Unlimited 00:11:45.604 Command Set Identifier: NVM (00h) 00:11:45.604 Deallocate: Supported 00:11:45.604 Deallocated/Unwritten Error: Supported 00:11:45.604 Deallocated Read Value: All 0x00 00:11:45.604 Deallocate in Write Zeroes: Not Supported 00:11:45.604 Deallocated Guard Field: 0xFFFF 00:11:45.604 Flush: Supported 00:11:45.604 Reservation: Not Supported 00:11:45.604 Metadata Transferred as: Separate Metadata Buffer 00:11:45.604 Namespace Sharing Capabilities: Private 00:11:45.604 Size (in LBAs): 1548666 (5GiB) 00:11:45.604 Capacity (in LBAs): 1548666 (5GiB) 00:11:45.604 Utilization (in LBAs): 1548666 (5GiB) 00:11:45.604 Thin Provisioning: Not Supported 00:11:45.604 Per-NS Atomic Units: No 00:11:45.604 Maximum Single Source Range Length: 128 00:11:45.604 Maximum Copy Length: 128 00:11:45.604 Maximum Source Range Count: 128 00:11:45.604 NGUID/EUI64 Never Reused: No 00:11:45.604 Namespace Write Protected: No 00:11:45.604 Number of LBA Formats: 8 00:11:45.604 Current LBA Format: LBA Format #07 00:11:45.604 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:45.604 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:45.604 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:45.604 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:45.604 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:45.604 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:45.604 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:45.604 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:45.604 00:11:45.604 NVM Specific Namespace Data 00:11:45.604 =========================== 00:11:45.604 Logical Block Storage Tag Mask: 0 00:11:45.604 Protection Information Capabilities: 00:11:45.604 16b Guard Protection Information Storage Tag Support: No 00:11:45.604 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:45.604 Storage Tag Check Read Support: No 00:11:45.604 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.604 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.604 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.604 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.604 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.604 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.604 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.604 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.604 ===================================================== 00:11:45.604 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:45.604 ===================================================== 00:11:45.604 Controller Capabilities/Features 00:11:45.604 ================================ 00:11:45.604 Vendor ID: 1b36 00:11:45.604 Subsystem Vendor ID: 1af4 00:11:45.604 Serial Number: 12341 00:11:45.604 Model Number: QEMU NVMe Ctrl 00:11:45.604 Firmware Version: 8.0.0 00:11:45.604 Recommended Arb Burst: 6 00:11:45.604 IEEE OUI Identifier: 00 54 52 00:11:45.604 Multi-path I/O 00:11:45.604 May have multiple subsystem ports: No 00:11:45.604 May have multiple controllers: No 
00:11:45.604 Associated with SR-IOV VF: No 00:11:45.604 Max Data Transfer Size: 524288 00:11:45.604 Max Number of Namespaces: 256 00:11:45.604 Max Number of I/O Queues: 64 00:11:45.604 NVMe Specification Version (VS): 1.4 00:11:45.604 NVMe Specification Version (Identify): 1.4 00:11:45.604 Maximum Queue Entries: 2048 00:11:45.604 Contiguous Queues Required: Yes 00:11:45.604 Arbitration Mechanisms Supported 00:11:45.604 Weighted Round Robin: Not Supported 00:11:45.604 Vendor Specific: Not Supported 00:11:45.604 Reset Timeout: 7500 ms 00:11:45.604 Doorbell Stride: 4 bytes 00:11:45.604 NVM Subsystem Reset: Not Supported 00:11:45.604 Command Sets Supported 00:11:45.604 NVM Command Set: Supported 00:11:45.604 Boot Partition: Not Supported 00:11:45.604 Memory Page Size Minimum: 4096 bytes 00:11:45.604 Memory Page Size Maximum: 65536 bytes 00:11:45.604 Persistent Memory Region: Not Supported 00:11:45.604 Optional Asynchronous Events Supported 00:11:45.604 Namespace Attribute Notices: Supported 00:11:45.604 Firmware Activation Notices: Not Supported 00:11:45.604 ANA Change Notices: Not Supported 00:11:45.604 PLE Aggregate Log Change Notices: Not Supported 00:11:45.604 LBA Status Info Alert Notices: Not Supported 00:11:45.604 EGE Aggregate Log Change Notices: Not Supported 00:11:45.604 Normal NVM Subsystem Shutdown event: Not Supported 00:11:45.604 Zone Descriptor Change Notices: Not Supported 00:11:45.604 Discovery Log Change Notices: Not Supported 00:11:45.604 Controller Attributes 00:11:45.604 128-bit Host Identifier: Not Supported 00:11:45.604 Non-Operational Permissive Mode: Not Supported 00:11:45.604 NVM Sets: Not Supported 00:11:45.604 Read Recovery Levels: Not Supported 00:11:45.604 Endurance Groups: Not Supported 00:11:45.604 Predictable Latency Mode: Not Supported 00:11:45.604 Traffic Based Keep ALive: Not Supported 00:11:45.604 Namespace Granularity: Not Supported 00:11:45.604 SQ Associations: Not Supported 00:11:45.604 UUID List: Not Supported 00:11:45.604 Multi-Domain Subsystem: Not Supported 00:11:45.604 Fixed Capacity Management: Not Supported 00:11:45.604 Variable Capacity Management: Not Supported 00:11:45.604 Delete Endurance Group: Not Supported 00:11:45.604 Delete NVM Set: Not Supported 00:11:45.604 Extended LBA Formats Supported: Supported 00:11:45.604 Flexible Data Placement Supported: Not Supported 00:11:45.604 00:11:45.604 Controller Memory Buffer Support 00:11:45.604 ================================ 00:11:45.604 Supported: No 00:11:45.604 00:11:45.604 Persistent Memory Region Support 00:11:45.604 ================================ 00:11:45.604 Supported: No 00:11:45.604 00:11:45.604 Admin Command Set Attributes 00:11:45.604 ============================ 00:11:45.604 Security Send/Receive: Not Supported 00:11:45.604 Format NVM: Supported 00:11:45.604 Firmware Activate/Download: Not Supported 00:11:45.604 Namespace Management: Supported 00:11:45.604 Device Self-Test: Not Supported 00:11:45.604 Directives: Supported 00:11:45.604 NVMe-MI: Not Supported 00:11:45.604 Virtualization Management: Not Supported 00:11:45.604 Doorbell Buffer Config: Supported 00:11:45.604 Get LBA Status Capability: Not Supported 00:11:45.604 Command & Feature Lockdown Capability: Not Supported 00:11:45.604 Abort Command Limit: 4 00:11:45.604 Async Event Request Limit: 4 00:11:45.604 Number of Firmware Slots: N/A 00:11:45.604 Firmware Slot 1 Read-Only: N/A 00:11:45.604 Firmware Activation Without Reset: N/A 00:11:45.604 Multiple Update Detection Support: N/A 00:11:45.604 Firmware Update Granularity: No 
Information Provided 00:11:45.604 Per-Namespace SMART Log: Yes 00:11:45.604 Asymmetric Namespace Access Log Page: Not Supported 00:11:45.604 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:45.604 Command Effects Log Page: Supported 00:11:45.604 Get Log Page Extended Data: Supported 00:11:45.604 Telemetry Log Pages: Not Supported 00:11:45.604 Persistent Event Log Pages: Not Supported 00:11:45.604 Supported Log Pages Log Page: May Support 00:11:45.604 Commands Supported & Effects Log Page: Not Supported 00:11:45.604 Feature Identifiers & Effects Log Page:May Support 00:11:45.604 NVMe-MI Commands & Effects Log Page: May Support 00:11:45.604 Data Area 4 for Telemetry Log: Not Supported 00:11:45.604 Error Log Page Entries Supported: 1 00:11:45.604 Keep Alive: Not Supported 00:11:45.604 00:11:45.604 NVM Command Set Attributes 00:11:45.604 ========================== 00:11:45.604 Submission Queue Entry Size 00:11:45.604 Max: 64 00:11:45.604 Min: 64 00:11:45.604 Completion Queue Entry Size 00:11:45.604 Max: 16 00:11:45.604 Min: 16 00:11:45.604 Number of Namespaces: 256 00:11:45.604 Compare Command: Supported 00:11:45.604 Write Uncorrectable Command: Not Supported 00:11:45.604 Dataset Management Command: Supported 00:11:45.604 Write Zeroes Command: Supported 00:11:45.604 Set Features Save Field: Supported 00:11:45.604 Reservations: Not Supported 00:11:45.604 Timestamp: Supported 00:11:45.604 Copy: Supported 00:11:45.604 Volatile Write Cache: Present 00:11:45.604 Atomic Write Unit (Normal): 1 00:11:45.604 Atomic Write Unit (PFail): 1 00:11:45.604 Atomic Compare & Write Unit: 1 00:11:45.604 Fused Compare & Write: Not Supported 00:11:45.604 Scatter-Gather List 00:11:45.604 SGL Command Set: Supported 00:11:45.604 SGL Keyed: Not Supported 00:11:45.604 SGL Bit Bucket Descriptor: Not Supported 00:11:45.604 SGL Metadata Pointer: Not Supported 00:11:45.604 Oversized SGL: Not Supported 00:11:45.604 SGL Metadata Address: Not Supported 00:11:45.604 SGL Offset: Not Supported 00:11:45.604 Transport SGL Data Block: Not Supported 00:11:45.604 Replay Protected Memory Block: Not Supported 00:11:45.604 00:11:45.604 Firmware Slot Information 00:11:45.604 ========================= 00:11:45.604 Active slot: 1 00:11:45.604 Slot 1 Firmware Revision: 1.0 00:11:45.604 00:11:45.604 00:11:45.604 Commands Supported and Effects 00:11:45.604 ============================== 00:11:45.604 Admin Commands 00:11:45.604 -------------- 00:11:45.604 Delete I/O Submission Queue (00h): Supported 00:11:45.604 Create I/O Submission Queue (01h): Supported 00:11:45.604 Get Log Page (02h): Supported 00:11:45.604 Delete I/O Completion Queue (04h): Supported 00:11:45.604 Create I/O Completion Queue (05h): Supported 00:11:45.604 Identify (06h): Supported 00:11:45.604 Abort (08h): Supported 00:11:45.604 Set Features (09h): Supported 00:11:45.604 Get Features (0Ah): Supported 00:11:45.604 Asynchronous Event Request (0Ch): Supported 00:11:45.604 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:45.604 Directive Send (19h): Supported 00:11:45.604 Directive Receive (1Ah): Supported 00:11:45.605 Virtualization Management (1Ch): Supported 00:11:45.605 Doorbell Buffer Config (7Ch): Supported 00:11:45.605 Format NVM (80h): Supported LBA-Change 00:11:45.605 I/O Commands 00:11:45.605 ------------ 00:11:45.605 Flush (00h): Supported LBA-Change 00:11:45.605 Write (01h): Supported LBA-Change 00:11:45.605 Read (02h): Supported 00:11:45.605 Compare (05h): Supported 00:11:45.605 Write Zeroes (08h): Supported LBA-Change 00:11:45.605 Dataset Management 
(09h): Supported LBA-Change 00:11:45.605 Unknown (0Ch): Supported 00:11:45.605 Unknown (12h): Supported 00:11:45.605 Copy (19h): Supported LBA-Change 00:11:45.605 Unknown (1Dh): Supported LBA-Change 00:11:45.605 00:11:45.605 Error Log 00:11:45.605 ========= 00:11:45.605 00:11:45.605 Arbitration 00:11:45.605 =========== 00:11:45.605 Arbitration Burst: no limit 00:11:45.605 00:11:45.605 Power Management 00:11:45.605 ================ 00:11:45.605 Number of Power States: 1 00:11:45.605 Current Power State: Power State #0 00:11:45.605 Power State #0: 00:11:45.605 Max Power: 25.00 W 00:11:45.605 Non-Operational State: Operational 00:11:45.605 Entry Latency: 16 microseconds 00:11:45.605 Exit Latency: 4 microseconds 00:11:45.605 Relative Read Throughput: 0 00:11:45.605 Relative Read Latency: 0 00:11:45.605 Relative Write Throughput: 0 00:11:45.605 Relative Write Latency: 0 00:11:45.605 Idle Power: Not Reported 00:11:45.605 Active Power: Not Reported 00:11:45.605 Non-Operational Permissive Mode: Not Supported 00:11:45.605 00:11:45.605 Health Information 00:11:45.605 ================== 00:11:45.605 Critical Warnings: 00:11:45.605 Available Spare Space: OK 00:11:45.605 [2024-12-06 18:09:56.089744] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64136 terminated unexpected 00:11:45.605 Temperature: OK 00:11:45.605 Device Reliability: OK 00:11:45.605 Read Only: No 00:11:45.605 Volatile Memory Backup: OK 00:11:45.605 Current Temperature: 323 Kelvin (50 Celsius) 00:11:45.605 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:45.605 Available Spare: 0% 00:11:45.605 Available Spare Threshold: 0% 00:11:45.605 Life Percentage Used: 0% 00:11:45.605 Data Units Read: 1197 00:11:45.605 Data Units Written: 1064 00:11:45.605 Host Read Commands: 55067 00:11:45.605 Host Write Commands: 53859 00:11:45.605 Controller Busy Time: 0 minutes 00:11:45.605 Power Cycles: 0 00:11:45.605 Power On Hours: 0 hours 00:11:45.605 Unsafe Shutdowns: 0 00:11:45.605 Unrecoverable Media Errors: 0 00:11:45.605 Lifetime Error Log Entries: 0 00:11:45.605 Warning Temperature Time: 0 minutes 00:11:45.605 Critical Temperature Time: 0 minutes 00:11:45.605 00:11:45.605 Number of Queues 00:11:45.605 ================ 00:11:45.605 Number of I/O Submission Queues: 64 00:11:45.605 Number of I/O Completion Queues: 64 00:11:45.605 00:11:45.605 ZNS Specific Controller Data 00:11:45.605 ============================ 00:11:45.605 Zone Append Size Limit: 0 00:11:45.605 00:11:45.605 00:11:45.605 Active Namespaces 00:11:45.605 ================= 00:11:45.605 Namespace ID:1 00:11:45.605 Error Recovery Timeout: Unlimited 00:11:45.605 Command Set Identifier: NVM (00h) 00:11:45.605 Deallocate: Supported 00:11:45.605 Deallocated/Unwritten Error: Supported 00:11:45.605 Deallocated Read Value: All 0x00 00:11:45.605 Deallocate in Write Zeroes: Not Supported 00:11:45.605 Deallocated Guard Field: 0xFFFF 00:11:45.605 Flush: Supported 00:11:45.605 Reservation: Not Supported 00:11:45.605 Namespace Sharing Capabilities: Private 00:11:45.605 Size (in LBAs): 1310720 (5GiB) 00:11:45.605 Capacity (in LBAs): 1310720 (5GiB) 00:11:45.605 Utilization (in LBAs): 1310720 (5GiB) 00:11:45.605 Thin Provisioning: Not Supported 00:11:45.605 Per-NS Atomic Units: No 00:11:45.605 Maximum Single Source Range Length: 128 00:11:45.605 Maximum Copy Length: 128 00:11:45.605 Maximum Source Range Count: 128 00:11:45.605 NGUID/EUI64 Never Reused: No 00:11:45.605 Namespace Write Protected: No 00:11:45.605 Number of LBA Formats: 8 00:11:45.605 Current LBA
Format: LBA Format #04 00:11:45.605 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:45.605 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:45.605 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:45.605 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:45.605 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:45.605 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:45.605 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:45.605 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:45.605 00:11:45.605 NVM Specific Namespace Data 00:11:45.605 =========================== 00:11:45.605 Logical Block Storage Tag Mask: 0 00:11:45.605 Protection Information Capabilities: 00:11:45.605 16b Guard Protection Information Storage Tag Support: No 00:11:45.605 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:45.605 Storage Tag Check Read Support: No 00:11:45.605 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.605 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.605 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.605 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.605 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.605 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.605 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.605 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.605 ===================================================== 00:11:45.605 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:45.605 ===================================================== 00:11:45.605 Controller Capabilities/Features 00:11:45.605 ================================ 00:11:45.605 Vendor ID: 1b36 00:11:45.605 Subsystem Vendor ID: 1af4 00:11:45.605 Serial Number: 12343 00:11:45.605 Model Number: QEMU NVMe Ctrl 00:11:45.605 Firmware Version: 8.0.0 00:11:45.605 Recommended Arb Burst: 6 00:11:45.605 IEEE OUI Identifier: 00 54 52 00:11:45.605 Multi-path I/O 00:11:45.605 May have multiple subsystem ports: No 00:11:45.605 May have multiple controllers: Yes 00:11:45.605 Associated with SR-IOV VF: No 00:11:45.605 Max Data Transfer Size: 524288 00:11:45.605 Max Number of Namespaces: 256 00:11:45.605 Max Number of I/O Queues: 64 00:11:45.605 NVMe Specification Version (VS): 1.4 00:11:45.605 NVMe Specification Version (Identify): 1.4 00:11:45.605 Maximum Queue Entries: 2048 00:11:45.605 Contiguous Queues Required: Yes 00:11:45.605 Arbitration Mechanisms Supported 00:11:45.605 Weighted Round Robin: Not Supported 00:11:45.605 Vendor Specific: Not Supported 00:11:45.605 Reset Timeout: 7500 ms 00:11:45.605 Doorbell Stride: 4 bytes 00:11:45.605 NVM Subsystem Reset: Not Supported 00:11:45.605 Command Sets Supported 00:11:45.605 NVM Command Set: Supported 00:11:45.605 Boot Partition: Not Supported 00:11:45.605 Memory Page Size Minimum: 4096 bytes 00:11:45.605 Memory Page Size Maximum: 65536 bytes 00:11:45.605 Persistent Memory Region: Not Supported 00:11:45.605 Optional Asynchronous Events Supported 00:11:45.605 Namespace Attribute Notices: Supported 00:11:45.605 Firmware Activation Notices: Not Supported 00:11:45.605 ANA Change Notices: Not Supported 00:11:45.605 PLE Aggregate 
Log Change Notices: Not Supported 00:11:45.605 LBA Status Info Alert Notices: Not Supported 00:11:45.605 EGE Aggregate Log Change Notices: Not Supported 00:11:45.605 Normal NVM Subsystem Shutdown event: Not Supported 00:11:45.605 Zone Descriptor Change Notices: Not Supported 00:11:45.605 Discovery Log Change Notices: Not Supported 00:11:45.605 Controller Attributes 00:11:45.605 128-bit Host Identifier: Not Supported 00:11:45.605 Non-Operational Permissive Mode: Not Supported 00:11:45.605 NVM Sets: Not Supported 00:11:45.605 Read Recovery Levels: Not Supported 00:11:45.605 Endurance Groups: Supported 00:11:45.605 Predictable Latency Mode: Not Supported 00:11:45.605 Traffic Based Keep ALive: Not Supported 00:11:45.605 Namespace Granularity: Not Supported 00:11:45.605 SQ Associations: Not Supported 00:11:45.605 UUID List: Not Supported 00:11:45.605 Multi-Domain Subsystem: Not Supported 00:11:45.605 Fixed Capacity Management: Not Supported 00:11:45.605 Variable Capacity Management: Not Supported 00:11:45.605 Delete Endurance Group: Not Supported 00:11:45.605 Delete NVM Set: Not Supported 00:11:45.605 Extended LBA Formats Supported: Supported 00:11:45.605 Flexible Data Placement Supported: Supported 00:11:45.605 00:11:45.605 Controller Memory Buffer Support 00:11:45.605 ================================ 00:11:45.605 Supported: No 00:11:45.605 00:11:45.605 Persistent Memory Region Support 00:11:45.605 ================================ 00:11:45.605 Supported: No 00:11:45.605 00:11:45.605 Admin Command Set Attributes 00:11:45.605 ============================ 00:11:45.605 Security Send/Receive: Not Supported 00:11:45.605 Format NVM: Supported 00:11:45.605 Firmware Activate/Download: Not Supported 00:11:45.605 Namespace Management: Supported 00:11:45.605 Device Self-Test: Not Supported 00:11:45.605 Directives: Supported 00:11:45.605 NVMe-MI: Not Supported 00:11:45.605 Virtualization Management: Not Supported 00:11:45.605 Doorbell Buffer Config: Supported 00:11:45.605 Get LBA Status Capability: Not Supported 00:11:45.605 Command & Feature Lockdown Capability: Not Supported 00:11:45.605 Abort Command Limit: 4 00:11:45.605 Async Event Request Limit: 4 00:11:45.605 Number of Firmware Slots: N/A 00:11:45.605 Firmware Slot 1 Read-Only: N/A 00:11:45.605 Firmware Activation Without Reset: N/A 00:11:45.605 Multiple Update Detection Support: N/A 00:11:45.605 Firmware Update Granularity: No Information Provided 00:11:45.605 Per-Namespace SMART Log: Yes 00:11:45.605 Asymmetric Namespace Access Log Page: Not Supported 00:11:45.605 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:45.605 Command Effects Log Page: Supported 00:11:45.605 Get Log Page Extended Data: Supported 00:11:45.605 Telemetry Log Pages: Not Supported 00:11:45.605 Persistent Event Log Pages: Not Supported 00:11:45.605 Supported Log Pages Log Page: May Support 00:11:45.605 Commands Supported & Effects Log Page: Not Supported 00:11:45.605 Feature Identifiers & Effects Log Page:May Support 00:11:45.605 NVMe-MI Commands & Effects Log Page: May Support 00:11:45.605 Data Area 4 for Telemetry Log: Not Supported 00:11:45.605 Error Log Page Entries Supported: 1 00:11:45.605 Keep Alive: Not Supported 00:11:45.605 00:11:45.605 NVM Command Set Attributes 00:11:45.605 ========================== 00:11:45.605 Submission Queue Entry Size 00:11:45.605 Max: 64 00:11:45.605 Min: 64 00:11:45.605 Completion Queue Entry Size 00:11:45.605 Max: 16 00:11:45.605 Min: 16 00:11:45.605 Number of Namespaces: 256 00:11:45.605 Compare Command: Supported 00:11:45.605 Write 
Uncorrectable Command: Not Supported 00:11:45.605 Dataset Management Command: Supported 00:11:45.605 Write Zeroes Command: Supported 00:11:45.605 Set Features Save Field: Supported 00:11:45.605 Reservations: Not Supported 00:11:45.605 Timestamp: Supported 00:11:45.605 Copy: Supported 00:11:45.606 Volatile Write Cache: Present 00:11:45.606 Atomic Write Unit (Normal): 1 00:11:45.606 Atomic Write Unit (PFail): 1 00:11:45.606 Atomic Compare & Write Unit: 1 00:11:45.606 Fused Compare & Write: Not Supported 00:11:45.606 Scatter-Gather List 00:11:45.606 SGL Command Set: Supported 00:11:45.606 SGL Keyed: Not Supported 00:11:45.606 SGL Bit Bucket Descriptor: Not Supported 00:11:45.606 SGL Metadata Pointer: Not Supported 00:11:45.606 Oversized SGL: Not Supported 00:11:45.606 SGL Metadata Address: Not Supported 00:11:45.606 SGL Offset: Not Supported 00:11:45.606 Transport SGL Data Block: Not Supported 00:11:45.606 Replay Protected Memory Block: Not Supported 00:11:45.606 00:11:45.606 Firmware Slot Information 00:11:45.606 ========================= 00:11:45.606 Active slot: 1 00:11:45.606 Slot 1 Firmware Revision: 1.0 00:11:45.606 00:11:45.606 00:11:45.606 Commands Supported and Effects 00:11:45.606 ============================== 00:11:45.606 Admin Commands 00:11:45.606 -------------- 00:11:45.606 Delete I/O Submission Queue (00h): Supported 00:11:45.606 Create I/O Submission Queue (01h): Supported 00:11:45.606 Get Log Page (02h): Supported 00:11:45.606 Delete I/O Completion Queue (04h): Supported 00:11:45.606 Create I/O Completion Queue (05h): Supported 00:11:45.606 Identify (06h): Supported 00:11:45.606 Abort (08h): Supported 00:11:45.606 Set Features (09h): Supported 00:11:45.606 Get Features (0Ah): Supported 00:11:45.606 Asynchronous Event Request (0Ch): Supported 00:11:45.606 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:45.606 Directive Send (19h): Supported 00:11:45.606 Directive Receive (1Ah): Supported 00:11:45.606 Virtualization Management (1Ch): Supported 00:11:45.606 Doorbell Buffer Config (7Ch): Supported 00:11:45.606 Format NVM (80h): Supported LBA-Change 00:11:45.606 I/O Commands 00:11:45.606 ------------ 00:11:45.606 Flush (00h): Supported LBA-Change 00:11:45.606 Write (01h): Supported LBA-Change 00:11:45.606 Read (02h): Supported 00:11:45.606 Compare (05h): Supported 00:11:45.606 Write Zeroes (08h): Supported LBA-Change 00:11:45.606 Dataset Management (09h): Supported LBA-Change 00:11:45.606 Unknown (0Ch): Supported 00:11:45.606 Unknown (12h): Supported 00:11:45.606 Copy (19h): Supported LBA-Change 00:11:45.606 Unknown (1Dh): Supported LBA-Change 00:11:45.606 00:11:45.606 Error Log 00:11:45.606 ========= 00:11:45.606 00:11:45.606 Arbitration 00:11:45.606 =========== 00:11:45.606 Arbitration Burst: no limit 00:11:45.606 00:11:45.606 Power Management 00:11:45.606 ================ 00:11:45.606 Number of Power States: 1 00:11:45.606 Current Power State: Power State #0 00:11:45.606 Power State #0: 00:11:45.606 Max Power: 25.00 W 00:11:45.606 Non-Operational State: Operational 00:11:45.606 Entry Latency: 16 microseconds 00:11:45.606 Exit Latency: 4 microseconds 00:11:45.606 Relative Read Throughput: 0 00:11:45.606 Relative Read Latency: 0 00:11:45.606 Relative Write Throughput: 0 00:11:45.606 Relative Write Latency: 0 00:11:45.606 Idle Power: Not Reported 00:11:45.606 Active Power: Not Reported 00:11:45.606 Non-Operational Permissive Mode: Not Supported 00:11:45.606 00:11:45.606 Health Information 00:11:45.606 ================== 00:11:45.606 Critical Warnings: 00:11:45.606 
Available Spare Space: OK 00:11:45.606 Temperature: OK 00:11:45.606 Device Reliability: OK 00:11:45.606 Read Only: No 00:11:45.606 Volatile Memory Backup: OK 00:11:45.606 Current Temperature: 323 Kelvin (50 Celsius) 00:11:45.606 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:45.606 Available Spare: 0% 00:11:45.606 Available Spare Threshold: 0% 00:11:45.606 Life Percentage Used: 0% 00:11:45.606 Data Units Read: 1016 00:11:45.606 Data Units Written: 945 00:11:45.606 Host Read Commands: 39338 00:11:45.606 Host Write Commands: 38761 00:11:45.606 Controller Busy Time: 0 minutes 00:11:45.606 Power Cycles: 0 00:11:45.606 Power On Hours: 0 hours 00:11:45.606 Unsafe Shutdowns: 0 00:11:45.606 Unrecoverable Media Errors: 0 00:11:45.606 Lifetime Error Log Entries: 0 00:11:45.606 Warning Temperature Time: 0 minutes 00:11:45.606 Critical Temperature Time: 0 minutes 00:11:45.606 00:11:45.606 Number of Queues 00:11:45.606 ================ 00:11:45.606 Number of I/O Submission Queues: 64 00:11:45.606 Number of I/O Completion Queues: 64 00:11:45.606 00:11:45.606 ZNS Specific Controller Data 00:11:45.606 ============================ 00:11:45.606 Zone Append Size Limit: 0 00:11:45.606 00:11:45.606 00:11:45.606 Active Namespaces 00:11:45.606 ================= 00:11:45.606 Namespace ID:1 00:11:45.606 Error Recovery Timeout: Unlimited 00:11:45.606 Command Set Identifier: NVM (00h) 00:11:45.606 Deallocate: Supported 00:11:45.606 Deallocated/Unwritten Error: Supported 00:11:45.606 Deallocated Read Value: All 0x00 00:11:45.606 Deallocate in Write Zeroes: Not Supported 00:11:45.606 Deallocated Guard Field: 0xFFFF 00:11:45.606 Flush: Supported 00:11:45.606 Reservation: Not Supported 00:11:45.606 Namespace Sharing Capabilities: Multiple Controllers 00:11:45.606 Size (in LBAs): 262144 (1GiB) 00:11:45.606 Capacity (in LBAs): 262144 (1GiB) 00:11:45.606 Utilization (in LBAs): 262144 (1GiB) 00:11:45.606 Thin Provisioning: Not Supported 00:11:45.606 Per-NS Atomic Units: No 00:11:45.606 Maximum Single Source Range Length: 128 00:11:45.606 Maximum Copy Length: 128 00:11:45.606 Maximum Source Range Count: 128 00:11:45.606 NGUID/EUI64 Never Reused: No 00:11:45.606 Namespace Write Protected: No 00:11:45.606 Endurance group ID: 1 00:11:45.606 Number of LBA Formats: 8 00:11:45.606 Current LBA Format: LBA Format #04 00:11:45.606 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:45.606 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:45.606 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:45.606 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:45.606 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:45.606 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:45.606 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:45.606 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:45.606 00:11:45.606 Get Feature FDP: 00:11:45.606 ================ 00:11:45.606 Enabled: Yes 00:11:45.606 FDP configuration index: 0 00:11:45.606 00:11:45.606 FDP configurations log page 00:11:45.606 =========================== 00:11:45.606 Number of FDP configurations: 1 00:11:45.606 Version: 0 00:11:45.606 Size: 112 00:11:45.606 FDP Configuration Descriptor: 0 00:11:45.606 Descriptor Size: 96 00:11:45.606 Reclaim Group Identifier format: 2 00:11:45.606 FDP Volatile Write Cache: Not Present 00:11:45.606 FDP Configuration: Valid 00:11:45.606 Vendor Specific Size: 0 00:11:45.606 Number of Reclaim Groups: 2 00:11:45.606 Number of Reclaim Unit Handles: 8 00:11:45.606 Max Placement Identifiers: 128 00:11:45.606 Number of Namespaces Supported: 256 00:11:45.606 Reclaim Unit Nominal Size: 6000000 bytes 00:11:45.606 Estimated Reclaim Unit Time Limit: Not Reported 00:11:45.606 RUH Desc #000: RUH Type: Initially Isolated 00:11:45.606 RUH Desc #001: RUH Type: Initially Isolated 00:11:45.606 RUH Desc #002: RUH Type: Initially Isolated 00:11:45.606 RUH Desc #003: RUH Type: Initially Isolated 00:11:45.606 RUH Desc #004: RUH Type: Initially Isolated 00:11:45.606 RUH Desc #005: RUH Type: Initially Isolated 00:11:45.606 RUH Desc #006: RUH Type: Initially Isolated 00:11:45.606 RUH Desc #007: RUH Type: Initially Isolated 00:11:45.606 00:11:45.606 FDP reclaim unit handle usage log page 00:11:45.606 ====================================== 00:11:45.606 Number of Reclaim Unit Handles: 8 00:11:45.606 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:45.606 RUH Usage Desc #001: RUH Attributes: Unused 00:11:45.606 RUH Usage Desc #002: RUH Attributes: Unused 00:11:45.606 RUH Usage Desc #003: RUH Attributes: Unused 00:11:45.606 RUH Usage Desc #004: RUH Attributes: Unused 00:11:45.606 RUH Usage Desc #005: RUH Attributes: Unused 00:11:45.606 RUH Usage Desc #006: RUH Attributes: Unused 00:11:45.606 RUH Usage Desc #007: RUH Attributes: Unused 00:11:45.606 00:11:45.606 FDP statistics log page 00:11:45.606 ======================= 00:11:45.606 Host bytes with metadata written: 586850304 00:11:45.606 [2024-12-06 18:09:56.091323] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64136 terminated unexpected 00:11:45.606 Media bytes with metadata written: 589090816 00:11:45.606 Media bytes erased: 0 00:11:45.606 00:11:45.606 FDP events log page 00:11:45.606 =================== 00:11:45.606 Number of FDP events: 0 00:11:45.606 00:11:45.606 NVM Specific Namespace Data 00:11:45.606 =========================== 00:11:45.606 Logical Block Storage Tag Mask: 0 00:11:45.606 Protection Information Capabilities: 00:11:45.606 16b Guard Protection Information Storage Tag Support: No 00:11:45.606 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:45.606 Storage Tag Check Read Support: No 00:11:45.606 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.606 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.606 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.606 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.606 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.606 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.606 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.606 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.606 ===================================================== 00:11:45.606 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:45.606 ===================================================== 00:11:45.606 Controller Capabilities/Features 00:11:45.606 ================================ 00:11:45.606 Vendor ID: 1b36 00:11:45.606 Subsystem Vendor ID: 1af4 00:11:45.606 Serial Number: 12342 00:11:45.606 Model Number: QEMU NVMe Ctrl 00:11:45.606 Firmware Version: 8.0.0 00:11:45.606 Recommended Arb Burst: 6 00:11:45.606 IEEE OUI Identifier: 00 54 52 00:11:45.606 Multi-path I/O
00:11:45.606 May have multiple subsystem ports: No 00:11:45.606 May have multiple controllers: No 00:11:45.606 Associated with SR-IOV VF: No 00:11:45.606 Max Data Transfer Size: 524288 00:11:45.606 Max Number of Namespaces: 256 00:11:45.606 Max Number of I/O Queues: 64 00:11:45.606 NVMe Specification Version (VS): 1.4 00:11:45.606 NVMe Specification Version (Identify): 1.4 00:11:45.606 Maximum Queue Entries: 2048 00:11:45.606 Contiguous Queues Required: Yes 00:11:45.606 Arbitration Mechanisms Supported 00:11:45.606 Weighted Round Robin: Not Supported 00:11:45.606 Vendor Specific: Not Supported 00:11:45.607 Reset Timeout: 7500 ms 00:11:45.607 Doorbell Stride: 4 bytes 00:11:45.607 NVM Subsystem Reset: Not Supported 00:11:45.607 Command Sets Supported 00:11:45.607 NVM Command Set: Supported 00:11:45.607 Boot Partition: Not Supported 00:11:45.607 Memory Page Size Minimum: 4096 bytes 00:11:45.607 Memory Page Size Maximum: 65536 bytes 00:11:45.607 Persistent Memory Region: Not Supported 00:11:45.607 Optional Asynchronous Events Supported 00:11:45.607 Namespace Attribute Notices: Supported 00:11:45.607 Firmware Activation Notices: Not Supported 00:11:45.607 ANA Change Notices: Not Supported 00:11:45.607 PLE Aggregate Log Change Notices: Not Supported 00:11:45.607 LBA Status Info Alert Notices: Not Supported 00:11:45.607 EGE Aggregate Log Change Notices: Not Supported 00:11:45.607 Normal NVM Subsystem Shutdown event: Not Supported 00:11:45.607 Zone Descriptor Change Notices: Not Supported 00:11:45.607 Discovery Log Change Notices: Not Supported 00:11:45.607 Controller Attributes 00:11:45.607 128-bit Host Identifier: Not Supported 00:11:45.607 Non-Operational Permissive Mode: Not Supported 00:11:45.607 NVM Sets: Not Supported 00:11:45.607 Read Recovery Levels: Not Supported 00:11:45.607 Endurance Groups: Not Supported 00:11:45.607 Predictable Latency Mode: Not Supported 00:11:45.607 Traffic Based Keep Alive: Not Supported 00:11:45.607 Namespace Granularity: Not Supported 00:11:45.607 SQ Associations: Not Supported 00:11:45.607 UUID List: Not Supported 00:11:45.607 Multi-Domain Subsystem: Not Supported 00:11:45.607 Fixed Capacity Management: Not Supported 00:11:45.607 Variable Capacity Management: Not Supported 00:11:45.607 Delete Endurance Group: Not Supported 00:11:45.607 Delete NVM Set: Not Supported 00:11:45.607 Extended LBA Formats Supported: Supported 00:11:45.607 Flexible Data Placement Supported: Not Supported 00:11:45.607 00:11:45.607 Controller Memory Buffer Support 00:11:45.607 ================================ 00:11:45.607 Supported: No 00:11:45.607 00:11:45.607 Persistent Memory Region Support 00:11:45.607 ================================ 00:11:45.607 Supported: No 00:11:45.607 00:11:45.607 Admin Command Set Attributes 00:11:45.607 ============================ 00:11:45.607 Security Send/Receive: Not Supported 00:11:45.607 Format NVM: Supported 00:11:45.607 Firmware Activate/Download: Not Supported 00:11:45.607 Namespace Management: Supported 00:11:45.607 Device Self-Test: Not Supported 00:11:45.607 Directives: Supported 00:11:45.607 NVMe-MI: Not Supported 00:11:45.607 Virtualization Management: Not Supported 00:11:45.607 Doorbell Buffer Config: Supported 00:11:45.607 Get LBA Status Capability: Not Supported 00:11:45.607 Command & Feature Lockdown Capability: Not Supported 00:11:45.607 Abort Command Limit: 4 00:11:45.607 Async Event Request Limit: 4 00:11:45.607 Number of Firmware Slots: N/A 00:11:45.607 Firmware Slot 1 Read-Only: N/A 00:11:45.607 Firmware Activation Without Reset: N/A
00:11:45.607 Multiple Update Detection Support: N/A 00:11:45.607 Firmware Update Granularity: No Information Provided 00:11:45.607 Per-Namespace SMART Log: Yes 00:11:45.607 Asymmetric Namespace Access Log Page: Not Supported 00:11:45.607 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:45.607 Command Effects Log Page: Supported 00:11:45.607 Get Log Page Extended Data: Supported 00:11:45.607 Telemetry Log Pages: Not Supported 00:11:45.607 Persistent Event Log Pages: Not Supported 00:11:45.607 Supported Log Pages Log Page: May Support 00:11:45.607 Commands Supported & Effects Log Page: Not Supported 00:11:45.607 Feature Identifiers & Effects Log Page: May Support 00:11:45.607 NVMe-MI Commands & Effects Log Page: May Support 00:11:45.607 Data Area 4 for Telemetry Log: Not Supported 00:11:45.607 Error Log Page Entries Supported: 1 00:11:45.607 Keep Alive: Not Supported 00:11:45.607 00:11:45.607 NVM Command Set Attributes 00:11:45.607 ========================== 00:11:45.607 Submission Queue Entry Size 00:11:45.607 Max: 64 00:11:45.607 Min: 64 00:11:45.607 Completion Queue Entry Size 00:11:45.607 Max: 16 00:11:45.607 Min: 16 00:11:45.607 Number of Namespaces: 256 00:11:45.607 Compare Command: Supported 00:11:45.607 Write Uncorrectable Command: Not Supported 00:11:45.607 Dataset Management Command: Supported 00:11:45.607 Write Zeroes Command: Supported 00:11:45.607 Set Features Save Field: Supported 00:11:45.607 Reservations: Not Supported 00:11:45.607 Timestamp: Supported 00:11:45.607 Copy: Supported 00:11:45.607 Volatile Write Cache: Present 00:11:45.607 Atomic Write Unit (Normal): 1 00:11:45.607 Atomic Write Unit (PFail): 1 00:11:45.607 Atomic Compare & Write Unit: 1 00:11:45.607 Fused Compare & Write: Not Supported 00:11:45.607 Scatter-Gather List 00:11:45.607 SGL Command Set: Supported 00:11:45.607 SGL Keyed: Not Supported 00:11:45.607 SGL Bit Bucket Descriptor: Not Supported 00:11:45.607 SGL Metadata Pointer: Not Supported 00:11:45.607 Oversized SGL: Not Supported 00:11:45.607 SGL Metadata Address: Not Supported 00:11:45.607 SGL Offset: Not Supported 00:11:45.607 Transport SGL Data Block: Not Supported 00:11:45.607 Replay Protected Memory Block: Not Supported 00:11:45.607 00:11:45.607 Firmware Slot Information 00:11:45.607 ========================= 00:11:45.607 Active slot: 1 00:11:45.607 Slot 1 Firmware Revision: 1.0 00:11:45.607 00:11:45.607 00:11:45.607 Commands Supported and Effects 00:11:45.607 ============================== 00:11:45.607 Admin Commands 00:11:45.607 -------------- 00:11:45.607 Delete I/O Submission Queue (00h): Supported 00:11:45.607 Create I/O Submission Queue (01h): Supported 00:11:45.607 Get Log Page (02h): Supported 00:11:45.607 Delete I/O Completion Queue (04h): Supported 00:11:45.607 Create I/O Completion Queue (05h): Supported 00:11:45.607 Identify (06h): Supported 00:11:45.607 Abort (08h): Supported 00:11:45.607 Set Features (09h): Supported 00:11:45.607 Get Features (0Ah): Supported 00:11:45.607 Asynchronous Event Request (0Ch): Supported 00:11:45.607 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:45.607 Directive Send (19h): Supported 00:11:45.607 Directive Receive (1Ah): Supported 00:11:45.607 Virtualization Management (1Ch): Supported 00:11:45.607 Doorbell Buffer Config (7Ch): Supported 00:11:45.607 Format NVM (80h): Supported LBA-Change 00:11:45.607 I/O Commands 00:11:45.607 ------------ 00:11:45.607 Flush (00h): Supported LBA-Change 00:11:45.607 Write (01h): Supported LBA-Change 00:11:45.607 Read (02h): Supported 00:11:45.607 Compare (05h):
Supported 00:11:45.607 Write Zeroes (08h): Supported LBA-Change 00:11:45.607 Dataset Management (09h): Supported LBA-Change 00:11:45.607 Unknown (0Ch): Supported 00:11:45.607 Unknown (12h): Supported 00:11:45.607 Copy (19h): Supported LBA-Change 00:11:45.607 Unknown (1Dh): Supported LBA-Change 00:11:45.607 00:11:45.607 Error Log 00:11:45.607 ========= 00:11:45.607 00:11:45.607 Arbitration 00:11:45.607 =========== 00:11:45.607 Arbitration Burst: no limit 00:11:45.607 00:11:45.607 Power Management 00:11:45.607 ================ 00:11:45.607 Number of Power States: 1 00:11:45.607 Current Power State: Power State #0 00:11:45.607 Power State #0: 00:11:45.607 Max Power: 25.00 W 00:11:45.607 Non-Operational State: Operational 00:11:45.607 Entry Latency: 16 microseconds 00:11:45.607 Exit Latency: 4 microseconds 00:11:45.607 Relative Read Throughput: 0 00:11:45.607 Relative Read Latency: 0 00:11:45.607 Relative Write Throughput: 0 00:11:45.607 Relative Write Latency: 0 00:11:45.607 Idle Power: Not Reported 00:11:45.607 Active Power: Not Reported 00:11:45.607 Non-Operational Permissive Mode: Not Supported 00:11:45.607 00:11:45.607 Health Information 00:11:45.607 ================== 00:11:45.607 Critical Warnings: 00:11:45.607 Available Spare Space: OK 00:11:45.607 Temperature: OK 00:11:45.607 Device Reliability: OK 00:11:45.607 Read Only: No 00:11:45.607 Volatile Memory Backup: OK 00:11:45.607 Current Temperature: 323 Kelvin (50 Celsius) 00:11:45.607 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:45.607 Available Spare: 0% 00:11:45.607 Available Spare Threshold: 0% 00:11:45.607 Life Percentage Used: 0% 00:11:45.607 Data Units Read: 2547 00:11:45.607 Data Units Written: 2334 00:11:45.607 Host Read Commands: 113920 00:11:45.607 Host Write Commands: 112190 00:11:45.607 Controller Busy Time: 0 minutes 00:11:45.607 Power Cycles: 0 00:11:45.607 Power On Hours: 0 hours 00:11:45.607 Unsafe Shutdowns: 0 00:11:45.607 Unrecoverable Media Errors: 0 00:11:45.607 Lifetime Error Log Entries: 0 00:11:45.607 Warning Temperature Time: 0 minutes 00:11:45.607 Critical Temperature Time: 0 minutes 00:11:45.607 00:11:45.607 Number of Queues 00:11:45.607 ================ 00:11:45.607 Number of I/O Submission Queues: 64 00:11:45.607 Number of I/O Completion Queues: 64 00:11:45.607 00:11:45.607 ZNS Specific Controller Data 00:11:45.607 ============================ 00:11:45.607 Zone Append Size Limit: 0 00:11:45.607 00:11:45.607 00:11:45.607 Active Namespaces 00:11:45.607 ================= 00:11:45.607 Namespace ID:1 00:11:45.607 Error Recovery Timeout: Unlimited 00:11:45.607 Command Set Identifier: NVM (00h) 00:11:45.607 Deallocate: Supported 00:11:45.607 Deallocated/Unwritten Error: Supported 00:11:45.608 Deallocated Read Value: All 0x00 00:11:45.608 Deallocate in Write Zeroes: Not Supported 00:11:45.608 Deallocated Guard Field: 0xFFFF 00:11:45.608 Flush: Supported 00:11:45.608 Reservation: Not Supported 00:11:45.608 Namespace Sharing Capabilities: Private 00:11:45.608 Size (in LBAs): 1048576 (4GiB) 00:11:45.608 Capacity (in LBAs): 1048576 (4GiB) 00:11:45.608 Utilization (in LBAs): 1048576 (4GiB) 00:11:45.608 Thin Provisioning: Not Supported 00:11:45.608 Per-NS Atomic Units: No 00:11:45.608 Maximum Single Source Range Length: 128 00:11:45.608 Maximum Copy Length: 128 00:11:45.608 Maximum Source Range Count: 128 00:11:45.608 NGUID/EUI64 Never Reused: No 00:11:45.608 Namespace Write Protected: No 00:11:45.608 Number of LBA Formats: 8 00:11:45.608 Current LBA Format: LBA Format #04 00:11:45.608 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:11:45.608 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:45.608 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:45.608 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:45.608 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:45.608 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:45.608 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:45.608 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:45.608 00:11:45.608 NVM Specific Namespace Data 00:11:45.608 =========================== 00:11:45.608 Logical Block Storage Tag Mask: 0 00:11:45.608 Protection Information Capabilities: 00:11:45.608 16b Guard Protection Information Storage Tag Support: No 00:11:45.608 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:45.608 Storage Tag Check Read Support: No 00:11:45.608 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Namespace ID:2 00:11:45.608 Error Recovery Timeout: Unlimited 00:11:45.608 Command Set Identifier: NVM (00h) 00:11:45.608 Deallocate: Supported 00:11:45.608 Deallocated/Unwritten Error: Supported 00:11:45.608 Deallocated Read Value: All 0x00 00:11:45.608 Deallocate in Write Zeroes: Not Supported 00:11:45.608 Deallocated Guard Field: 0xFFFF 00:11:45.608 Flush: Supported 00:11:45.608 Reservation: Not Supported 00:11:45.608 Namespace Sharing Capabilities: Private 00:11:45.608 Size (in LBAs): 1048576 (4GiB) 00:11:45.608 Capacity (in LBAs): 1048576 (4GiB) 00:11:45.608 Utilization (in LBAs): 1048576 (4GiB) 00:11:45.608 Thin Provisioning: Not Supported 00:11:45.608 Per-NS Atomic Units: No 00:11:45.608 Maximum Single Source Range Length: 128 00:11:45.608 Maximum Copy Length: 128 00:11:45.608 Maximum Source Range Count: 128 00:11:45.608 NGUID/EUI64 Never Reused: No 00:11:45.608 Namespace Write Protected: No 00:11:45.608 Number of LBA Formats: 8 00:11:45.608 Current LBA Format: LBA Format #04 00:11:45.608 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:45.608 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:45.608 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:45.608 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:45.608 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:45.608 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:45.608 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:45.608 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:45.608 00:11:45.608 NVM Specific Namespace Data 00:11:45.608 =========================== 00:11:45.608 Logical Block Storage Tag Mask: 0 00:11:45.608 Protection Information Capabilities: 00:11:45.608 16b Guard Protection Information Storage Tag Support: No 00:11:45.608 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:11:45.608 Storage Tag Check Read Support: No 00:11:45.608 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Namespace ID:3 00:11:45.608 Error Recovery Timeout: Unlimited 00:11:45.608 Command Set Identifier: NVM (00h) 00:11:45.608 Deallocate: Supported 00:11:45.608 Deallocated/Unwritten Error: Supported 00:11:45.608 Deallocated Read Value: All 0x00 00:11:45.608 Deallocate in Write Zeroes: Not Supported 00:11:45.608 Deallocated Guard Field: 0xFFFF 00:11:45.608 Flush: Supported 00:11:45.608 Reservation: Not Supported 00:11:45.608 Namespace Sharing Capabilities: Private 00:11:45.608 Size (in LBAs): 1048576 (4GiB) 00:11:45.608 Capacity (in LBAs): 1048576 (4GiB) 00:11:45.608 Utilization (in LBAs): 1048576 (4GiB) 00:11:45.608 Thin Provisioning: Not Supported 00:11:45.608 Per-NS Atomic Units: No 00:11:45.608 Maximum Single Source Range Length: 128 00:11:45.608 Maximum Copy Length: 128 00:11:45.608 Maximum Source Range Count: 128 00:11:45.608 NGUID/EUI64 Never Reused: No 00:11:45.608 Namespace Write Protected: No 00:11:45.608 Number of LBA Formats: 8 00:11:45.608 Current LBA Format: LBA Format #04 00:11:45.608 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:45.608 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:45.608 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:45.608 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:45.608 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:45.608 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:45.608 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:45.608 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:45.608 00:11:45.608 NVM Specific Namespace Data 00:11:45.608 =========================== 00:11:45.608 Logical Block Storage Tag Mask: 0 00:11:45.608 Protection Information Capabilities: 00:11:45.608 16b Guard Protection Information Storage Tag Support: No 00:11:45.608 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:45.608 Storage Tag Check Read Support: No 00:11:45.608 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:45.608 18:09:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:45.608 18:09:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:11:45.867 ===================================================== 00:11:45.867 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:45.867 ===================================================== 00:11:45.867 Controller Capabilities/Features 00:11:45.867 ================================ 00:11:45.867 Vendor ID: 1b36 00:11:45.867 Subsystem Vendor ID: 1af4 00:11:45.867 Serial Number: 12340 00:11:45.867 Model Number: QEMU NVMe Ctrl 00:11:45.867 Firmware Version: 8.0.0 00:11:45.867 Recommended Arb Burst: 6 00:11:45.867 IEEE OUI Identifier: 00 54 52 00:11:45.867 Multi-path I/O 00:11:45.867 May have multiple subsystem ports: No 00:11:45.867 May have multiple controllers: No 00:11:45.867 Associated with SR-IOV VF: No 00:11:45.867 Max Data Transfer Size: 524288 00:11:45.867 Max Number of Namespaces: 256 00:11:45.867 Max Number of I/O Queues: 64 00:11:45.867 NVMe Specification Version (VS): 1.4 00:11:45.867 NVMe Specification Version (Identify): 1.4 00:11:45.867 Maximum Queue Entries: 2048 00:11:45.867 Contiguous Queues Required: Yes 00:11:45.867 Arbitration Mechanisms Supported 00:11:45.867 Weighted Round Robin: Not Supported 00:11:45.867 Vendor Specific: Not Supported 00:11:45.867 Reset Timeout: 7500 ms 00:11:45.867 Doorbell Stride: 4 bytes 00:11:45.867 NVM Subsystem Reset: Not Supported 00:11:45.867 Command Sets Supported 00:11:45.867 NVM Command Set: Supported 00:11:45.867 Boot Partition: Not Supported 00:11:45.867 Memory Page Size Minimum: 4096 bytes 00:11:45.867 Memory Page Size Maximum: 65536 bytes 00:11:45.867 Persistent Memory Region: Not Supported 00:11:45.867 Optional Asynchronous Events Supported 00:11:45.867 Namespace Attribute Notices: Supported 00:11:45.867 Firmware Activation Notices: Not Supported 00:11:45.867 ANA Change Notices: Not Supported 00:11:45.867 PLE Aggregate Log Change Notices: Not Supported 00:11:45.867 LBA Status Info Alert Notices: Not Supported 00:11:45.867 EGE Aggregate Log Change Notices: Not Supported 00:11:45.867 Normal NVM Subsystem Shutdown event: Not Supported 00:11:45.867 Zone Descriptor Change Notices: Not Supported 00:11:45.867 Discovery Log Change Notices: Not Supported 00:11:45.867 Controller Attributes 00:11:45.867 128-bit Host Identifier: Not Supported 00:11:45.867 Non-Operational Permissive Mode: Not Supported 00:11:45.867 NVM Sets: Not Supported 00:11:45.867 Read Recovery Levels: Not Supported 00:11:45.867 Endurance Groups: Not Supported 00:11:45.867 Predictable Latency Mode: Not Supported 00:11:45.867 Traffic Based Keep Alive: Not Supported 00:11:45.867 Namespace Granularity: Not Supported 00:11:45.867 SQ Associations: Not Supported 00:11:45.867 UUID List: Not Supported 00:11:45.868 Multi-Domain Subsystem: Not Supported 00:11:45.868 Fixed Capacity Management: Not Supported 00:11:45.868 Variable Capacity Management: Not Supported 00:11:45.868 Delete Endurance Group: Not Supported 00:11:45.868 Delete NVM Set: Not Supported 00:11:45.868 Extended LBA Formats Supported: Supported 00:11:45.868 Flexible Data Placement Supported: Not Supported 00:11:45.868 00:11:45.868 Controller Memory Buffer Support 00:11:45.868 ================================ 00:11:45.868 Supported: No 00:11:45.868 00:11:45.868 Persistent Memory Region Support 00:11:45.868
================================ 00:11:45.868 Supported: No 00:11:45.868 00:11:45.868 Admin Command Set Attributes 00:11:45.868 ============================ 00:11:45.868 Security Send/Receive: Not Supported 00:11:45.868 Format NVM: Supported 00:11:45.868 Firmware Activate/Download: Not Supported 00:11:45.868 Namespace Management: Supported 00:11:45.868 Device Self-Test: Not Supported 00:11:45.868 Directives: Supported 00:11:45.868 NVMe-MI: Not Supported 00:11:45.868 Virtualization Management: Not Supported 00:11:45.868 Doorbell Buffer Config: Supported 00:11:45.868 Get LBA Status Capability: Not Supported 00:11:45.868 Command & Feature Lockdown Capability: Not Supported 00:11:45.868 Abort Command Limit: 4 00:11:45.868 Async Event Request Limit: 4 00:11:45.868 Number of Firmware Slots: N/A 00:11:45.868 Firmware Slot 1 Read-Only: N/A 00:11:45.868 Firmware Activation Without Reset: N/A 00:11:45.868 Multiple Update Detection Support: N/A 00:11:45.868 Firmware Update Granularity: No Information Provided 00:11:45.868 Per-Namespace SMART Log: Yes 00:11:45.868 Asymmetric Namespace Access Log Page: Not Supported 00:11:45.868 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:45.868 Command Effects Log Page: Supported 00:11:45.868 Get Log Page Extended Data: Supported 00:11:45.868 Telemetry Log Pages: Not Supported 00:11:45.868 Persistent Event Log Pages: Not Supported 00:11:45.868 Supported Log Pages Log Page: May Support 00:11:45.868 Commands Supported & Effects Log Page: Not Supported 00:11:45.868 Feature Identifiers & Effects Log Page: May Support 00:11:45.868 NVMe-MI Commands & Effects Log Page: May Support 00:11:45.868 Data Area 4 for Telemetry Log: Not Supported 00:11:45.868 Error Log Page Entries Supported: 1 00:11:45.868 Keep Alive: Not Supported 00:11:45.868 00:11:45.868 NVM Command Set Attributes 00:11:45.868 ========================== 00:11:45.868 Submission Queue Entry Size 00:11:45.868 Max: 64 00:11:45.868 Min: 64 00:11:45.868 Completion Queue Entry Size 00:11:45.868 Max: 16 00:11:45.868 Min: 16 00:11:45.868 Number of Namespaces: 256 00:11:45.868 Compare Command: Supported 00:11:45.868 Write Uncorrectable Command: Not Supported 00:11:45.868 Dataset Management Command: Supported 00:11:45.868 Write Zeroes Command: Supported 00:11:45.868 Set Features Save Field: Supported 00:11:45.868 Reservations: Not Supported 00:11:45.868 Timestamp: Supported 00:11:45.868 Copy: Supported 00:11:45.868 Volatile Write Cache: Present 00:11:45.868 Atomic Write Unit (Normal): 1 00:11:45.868 Atomic Write Unit (PFail): 1 00:11:45.868 Atomic Compare & Write Unit: 1 00:11:45.868 Fused Compare & Write: Not Supported 00:11:45.868 Scatter-Gather List 00:11:45.868 SGL Command Set: Supported 00:11:45.868 SGL Keyed: Not Supported 00:11:45.868 SGL Bit Bucket Descriptor: Not Supported 00:11:45.868 SGL Metadata Pointer: Not Supported 00:11:45.868 Oversized SGL: Not Supported 00:11:45.868 SGL Metadata Address: Not Supported 00:11:45.868 SGL Offset: Not Supported 00:11:45.868 Transport SGL Data Block: Not Supported 00:11:45.868 Replay Protected Memory Block: Not Supported 00:11:45.868 00:11:45.868 Firmware Slot Information 00:11:45.868 ========================= 00:11:45.868 Active slot: 1 00:11:45.868 Slot 1 Firmware Revision: 1.0 00:11:45.868 00:11:45.868 00:11:45.868 Commands Supported and Effects 00:11:45.868 ============================== 00:11:45.868 Admin Commands 00:11:45.868 -------------- 00:11:45.868 Delete I/O Submission Queue (00h): Supported 00:11:45.868 Create I/O Submission Queue (01h): Supported 00:11:45.868
Get Log Page (02h): Supported 00:11:45.868 Delete I/O Completion Queue (04h): Supported 00:11:45.868 Create I/O Completion Queue (05h): Supported 00:11:45.868 Identify (06h): Supported 00:11:45.868 Abort (08h): Supported 00:11:45.868 Set Features (09h): Supported 00:11:45.868 Get Features (0Ah): Supported 00:11:45.868 Asynchronous Event Request (0Ch): Supported 00:11:45.868 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:45.868 Directive Send (19h): Supported 00:11:45.868 Directive Receive (1Ah): Supported 00:11:45.868 Virtualization Management (1Ch): Supported 00:11:45.868 Doorbell Buffer Config (7Ch): Supported 00:11:45.868 Format NVM (80h): Supported LBA-Change 00:11:45.868 I/O Commands 00:11:45.868 ------------ 00:11:45.868 Flush (00h): Supported LBA-Change 00:11:45.868 Write (01h): Supported LBA-Change 00:11:45.868 Read (02h): Supported 00:11:45.868 Compare (05h): Supported 00:11:45.868 Write Zeroes (08h): Supported LBA-Change 00:11:45.868 Dataset Management (09h): Supported LBA-Change 00:11:45.868 Unknown (0Ch): Supported 00:11:45.868 Unknown (12h): Supported 00:11:45.868 Copy (19h): Supported LBA-Change 00:11:45.868 Unknown (1Dh): Supported LBA-Change 00:11:45.868 00:11:45.868 Error Log 00:11:45.868 ========= 00:11:45.868 00:11:45.868 Arbitration 00:11:45.868 =========== 00:11:45.868 Arbitration Burst: no limit 00:11:45.868 00:11:45.868 Power Management 00:11:45.868 ================ 00:11:45.868 Number of Power States: 1 00:11:45.868 Current Power State: Power State #0 00:11:45.868 Power State #0: 00:11:45.868 Max Power: 25.00 W 00:11:45.868 Non-Operational State: Operational 00:11:45.868 Entry Latency: 16 microseconds 00:11:45.868 Exit Latency: 4 microseconds 00:11:45.868 Relative Read Throughput: 0 00:11:45.868 Relative Read Latency: 0 00:11:45.868 Relative Write Throughput: 0 00:11:45.868 Relative Write Latency: 0 00:11:46.127 Idle Power: Not Reported 00:11:46.127 Active Power: Not Reported 00:11:46.127 Non-Operational Permissive Mode: Not Supported 00:11:46.127 00:11:46.127 Health Information 00:11:46.127 ================== 00:11:46.127 Critical Warnings: 00:11:46.127 Available Spare Space: OK 00:11:46.127 Temperature: OK 00:11:46.127 Device Reliability: OK 00:11:46.127 Read Only: No 00:11:46.127 Volatile Memory Backup: OK 00:11:46.127 Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.127 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:46.127 Available Spare: 0% 00:11:46.127 Available Spare Threshold: 0% 00:11:46.127 Life Percentage Used: 0% 00:11:46.127 Data Units Read: 794 00:11:46.127 Data Units Written: 722 00:11:46.127 Host Read Commands: 37180 00:11:46.127 Host Write Commands: 36966 00:11:46.127 Controller Busy Time: 0 minutes 00:11:46.127 Power Cycles: 0 00:11:46.127 Power On Hours: 0 hours 00:11:46.127 Unsafe Shutdowns: 0 00:11:46.127 Unrecoverable Media Errors: 0 00:11:46.127 Lifetime Error Log Entries: 0 00:11:46.127 Warning Temperature Time: 0 minutes 00:11:46.127 Critical Temperature Time: 0 minutes 00:11:46.127 00:11:46.127 Number of Queues 00:11:46.127 ================ 00:11:46.127 Number of I/O Submission Queues: 64 00:11:46.127 Number of I/O Completion Queues: 64 00:11:46.127 00:11:46.127 ZNS Specific Controller Data 00:11:46.127 ============================ 00:11:46.127 Zone Append Size Limit: 0 00:11:46.127 00:11:46.127 00:11:46.127 Active Namespaces 00:11:46.127 ================= 00:11:46.127 Namespace ID:1 00:11:46.127 Error Recovery Timeout: Unlimited 00:11:46.127 Command Set Identifier: NVM (00h) 00:11:46.127 Deallocate: Supported 
00:11:46.127 Deallocated/Unwritten Error: Supported 00:11:46.127 Deallocated Read Value: All 0x00 00:11:46.127 Deallocate in Write Zeroes: Not Supported 00:11:46.127 Deallocated Guard Field: 0xFFFF 00:11:46.127 Flush: Supported 00:11:46.127 Reservation: Not Supported 00:11:46.127 Metadata Transferred as: Separate Metadata Buffer 00:11:46.127 Namespace Sharing Capabilities: Private 00:11:46.127 Size (in LBAs): 1548666 (5GiB) 00:11:46.127 Capacity (in LBAs): 1548666 (5GiB) 00:11:46.127 Utilization (in LBAs): 1548666 (5GiB) 00:11:46.127 Thin Provisioning: Not Supported 00:11:46.127 Per-NS Atomic Units: No 00:11:46.127 Maximum Single Source Range Length: 128 00:11:46.127 Maximum Copy Length: 128 00:11:46.127 Maximum Source Range Count: 128 00:11:46.127 NGUID/EUI64 Never Reused: No 00:11:46.127 Namespace Write Protected: No 00:11:46.127 Number of LBA Formats: 8 00:11:46.127 Current LBA Format: LBA Format #07 00:11:46.127 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:46.127 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:46.127 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:46.127 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:46.127 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:46.127 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:46.127 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:46.127 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:46.127 00:11:46.127 NVM Specific Namespace Data 00:11:46.127 =========================== 00:11:46.127 Logical Block Storage Tag Mask: 0 00:11:46.127 Protection Information Capabilities: 00:11:46.127 16b Guard Protection Information Storage Tag Support: No 00:11:46.127 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:46.127 Storage Tag Check Read Support: No 00:11:46.127 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.127 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.127 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.127 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.127 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.127 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.127 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.127 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.127 18:09:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:46.128 18:09:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:11:46.387 ===================================================== 00:11:46.387 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:46.387 ===================================================== 00:11:46.387 Controller Capabilities/Features 00:11:46.387 ================================ 00:11:46.387 Vendor ID: 1b36 00:11:46.387 Subsystem Vendor ID: 1af4 00:11:46.387 Serial Number: 12341 00:11:46.387 Model Number: QEMU NVMe Ctrl 00:11:46.387 Firmware Version: 8.0.0 00:11:46.387 Recommended Arb Burst: 6 00:11:46.387 IEEE OUI Identifier: 00 54 52 00:11:46.387 Multi-path I/O 00:11:46.387 May have multiple subsystem ports: No 00:11:46.387 May have multiple 
controllers: No 00:11:46.387 Associated with SR-IOV VF: No 00:11:46.387 Max Data Transfer Size: 524288 00:11:46.387 Max Number of Namespaces: 256 00:11:46.387 Max Number of I/O Queues: 64 00:11:46.387 NVMe Specification Version (VS): 1.4 00:11:46.387 NVMe Specification Version (Identify): 1.4 00:11:46.387 Maximum Queue Entries: 2048 00:11:46.387 Contiguous Queues Required: Yes 00:11:46.387 Arbitration Mechanisms Supported 00:11:46.387 Weighted Round Robin: Not Supported 00:11:46.387 Vendor Specific: Not Supported 00:11:46.387 Reset Timeout: 7500 ms 00:11:46.387 Doorbell Stride: 4 bytes 00:11:46.387 NVM Subsystem Reset: Not Supported 00:11:46.387 Command Sets Supported 00:11:46.387 NVM Command Set: Supported 00:11:46.387 Boot Partition: Not Supported 00:11:46.387 Memory Page Size Minimum: 4096 bytes 00:11:46.387 Memory Page Size Maximum: 65536 bytes 00:11:46.387 Persistent Memory Region: Not Supported 00:11:46.387 Optional Asynchronous Events Supported 00:11:46.387 Namespace Attribute Notices: Supported 00:11:46.387 Firmware Activation Notices: Not Supported 00:11:46.387 ANA Change Notices: Not Supported 00:11:46.387 PLE Aggregate Log Change Notices: Not Supported 00:11:46.387 LBA Status Info Alert Notices: Not Supported 00:11:46.387 EGE Aggregate Log Change Notices: Not Supported 00:11:46.387 Normal NVM Subsystem Shutdown event: Not Supported 00:11:46.387 Zone Descriptor Change Notices: Not Supported 00:11:46.387 Discovery Log Change Notices: Not Supported 00:11:46.387 Controller Attributes 00:11:46.387 128-bit Host Identifier: Not Supported 00:11:46.387 Non-Operational Permissive Mode: Not Supported 00:11:46.387 NVM Sets: Not Supported 00:11:46.387 Read Recovery Levels: Not Supported 00:11:46.387 Endurance Groups: Not Supported 00:11:46.387 Predictable Latency Mode: Not Supported 00:11:46.387 Traffic Based Keep Alive: Not Supported 00:11:46.387 Namespace Granularity: Not Supported 00:11:46.387 SQ Associations: Not Supported 00:11:46.387 UUID List: Not Supported 00:11:46.387 Multi-Domain Subsystem: Not Supported 00:11:46.387 Fixed Capacity Management: Not Supported 00:11:46.387 Variable Capacity Management: Not Supported 00:11:46.387 Delete Endurance Group: Not Supported 00:11:46.387 Delete NVM Set: Not Supported 00:11:46.387 Extended LBA Formats Supported: Supported 00:11:46.387 Flexible Data Placement Supported: Not Supported 00:11:46.387 00:11:46.387 Controller Memory Buffer Support 00:11:46.387 ================================ 00:11:46.387 Supported: No 00:11:46.387 00:11:46.387 Persistent Memory Region Support 00:11:46.387 ================================ 00:11:46.387 Supported: No 00:11:46.387 00:11:46.387 Admin Command Set Attributes 00:11:46.387 ============================ 00:11:46.387 Security Send/Receive: Not Supported 00:11:46.387 Format NVM: Supported 00:11:46.387 Firmware Activate/Download: Not Supported 00:11:46.387 Namespace Management: Supported 00:11:46.387 Device Self-Test: Not Supported 00:11:46.387 Directives: Supported 00:11:46.388 NVMe-MI: Not Supported 00:11:46.388 Virtualization Management: Not Supported 00:11:46.388 Doorbell Buffer Config: Supported 00:11:46.388 Get LBA Status Capability: Not Supported 00:11:46.388 Command & Feature Lockdown Capability: Not Supported 00:11:46.388 Abort Command Limit: 4 00:11:46.388 Async Event Request Limit: 4 00:11:46.388 Number of Firmware Slots: N/A 00:11:46.388 Firmware Slot 1 Read-Only: N/A 00:11:46.388 Firmware Activation Without Reset: N/A 00:11:46.388 Multiple Update Detection Support: N/A 00:11:46.388 Firmware Update
Granularity: No Information Provided 00:11:46.388 Per-Namespace SMART Log: Yes 00:11:46.388 Asymmetric Namespace Access Log Page: Not Supported 00:11:46.388 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:46.388 Command Effects Log Page: Supported 00:11:46.388 Get Log Page Extended Data: Supported 00:11:46.388 Telemetry Log Pages: Not Supported 00:11:46.388 Persistent Event Log Pages: Not Supported 00:11:46.388 Supported Log Pages Log Page: May Support 00:11:46.388 Commands Supported & Effects Log Page: Not Supported 00:11:46.388 Feature Identifiers & Effects Log Page: May Support 00:11:46.388 NVMe-MI Commands & Effects Log Page: May Support 00:11:46.388 Data Area 4 for Telemetry Log: Not Supported 00:11:46.388 Error Log Page Entries Supported: 1 00:11:46.388 Keep Alive: Not Supported 00:11:46.388 00:11:46.388 NVM Command Set Attributes 00:11:46.388 ========================== 00:11:46.388 Submission Queue Entry Size 00:11:46.388 Max: 64 00:11:46.388 Min: 64 00:11:46.388 Completion Queue Entry Size 00:11:46.388 Max: 16 00:11:46.388 Min: 16 00:11:46.388 Number of Namespaces: 256 00:11:46.388 Compare Command: Supported 00:11:46.388 Write Uncorrectable Command: Not Supported 00:11:46.388 Dataset Management Command: Supported 00:11:46.388 Write Zeroes Command: Supported 00:11:46.388 Set Features Save Field: Supported 00:11:46.388 Reservations: Not Supported 00:11:46.388 Timestamp: Supported 00:11:46.388 Copy: Supported 00:11:46.388 Volatile Write Cache: Present 00:11:46.388 Atomic Write Unit (Normal): 1 00:11:46.388 Atomic Write Unit (PFail): 1 00:11:46.388 Atomic Compare & Write Unit: 1 00:11:46.388 Fused Compare & Write: Not Supported 00:11:46.388 Scatter-Gather List 00:11:46.388 SGL Command Set: Supported 00:11:46.388 SGL Keyed: Not Supported 00:11:46.388 SGL Bit Bucket Descriptor: Not Supported 00:11:46.388 SGL Metadata Pointer: Not Supported 00:11:46.388 Oversized SGL: Not Supported 00:11:46.388 SGL Metadata Address: Not Supported 00:11:46.388 SGL Offset: Not Supported 00:11:46.388 Transport SGL Data Block: Not Supported 00:11:46.388 Replay Protected Memory Block: Not Supported 00:11:46.388 00:11:46.388 Firmware Slot Information 00:11:46.388 ========================= 00:11:46.388 Active slot: 1 00:11:46.388 Slot 1 Firmware Revision: 1.0 00:11:46.388 00:11:46.388 00:11:46.388 Commands Supported and Effects 00:11:46.388 ============================== 00:11:46.388 Admin Commands 00:11:46.388 -------------- 00:11:46.388 Delete I/O Submission Queue (00h): Supported 00:11:46.388 Create I/O Submission Queue (01h): Supported 00:11:46.388 Get Log Page (02h): Supported 00:11:46.388 Delete I/O Completion Queue (04h): Supported 00:11:46.388 Create I/O Completion Queue (05h): Supported 00:11:46.388 Identify (06h): Supported 00:11:46.388 Abort (08h): Supported 00:11:46.388 Set Features (09h): Supported 00:11:46.388 Get Features (0Ah): Supported 00:11:46.388 Asynchronous Event Request (0Ch): Supported 00:11:46.388 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:46.388 Directive Send (19h): Supported 00:11:46.388 Directive Receive (1Ah): Supported 00:11:46.388 Virtualization Management (1Ch): Supported 00:11:46.388 Doorbell Buffer Config (7Ch): Supported 00:11:46.388 Format NVM (80h): Supported LBA-Change 00:11:46.388 I/O Commands 00:11:46.388 ------------ 00:11:46.388 Flush (00h): Supported LBA-Change 00:11:46.388 Write (01h): Supported LBA-Change 00:11:46.388 Read (02h): Supported 00:11:46.388 Compare (05h): Supported 00:11:46.388 Write Zeroes (08h): Supported LBA-Change 00:11:46.388
Dataset Management (09h): Supported LBA-Change 00:11:46.388 Unknown (0Ch): Supported 00:11:46.388 Unknown (12h): Supported 00:11:46.388 Copy (19h): Supported LBA-Change 00:11:46.388 Unknown (1Dh): Supported LBA-Change 00:11:46.388 00:11:46.388 Error Log 00:11:46.388 ========= 00:11:46.388 00:11:46.388 Arbitration 00:11:46.388 =========== 00:11:46.388 Arbitration Burst: no limit 00:11:46.388 00:11:46.388 Power Management 00:11:46.388 ================ 00:11:46.388 Number of Power States: 1 00:11:46.388 Current Power State: Power State #0 00:11:46.388 Power State #0: 00:11:46.388 Max Power: 25.00 W 00:11:46.388 Non-Operational State: Operational 00:11:46.388 Entry Latency: 16 microseconds 00:11:46.388 Exit Latency: 4 microseconds 00:11:46.388 Relative Read Throughput: 0 00:11:46.388 Relative Read Latency: 0 00:11:46.388 Relative Write Throughput: 0 00:11:46.388 Relative Write Latency: 0 00:11:46.388 Idle Power: Not Reported 00:11:46.388 Active Power: Not Reported 00:11:46.388 Non-Operational Permissive Mode: Not Supported 00:11:46.388 00:11:46.388 Health Information 00:11:46.388 ================== 00:11:46.388 Critical Warnings: 00:11:46.388 Available Spare Space: OK 00:11:46.388 Temperature: OK 00:11:46.388 Device Reliability: OK 00:11:46.388 Read Only: No 00:11:46.388 Volatile Memory Backup: OK 00:11:46.388 Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.388 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:46.388 Available Spare: 0% 00:11:46.388 Available Spare Threshold: 0% 00:11:46.388 Life Percentage Used: 0% 00:11:46.388 Data Units Read: 1197 00:11:46.388 Data Units Written: 1064 00:11:46.388 Host Read Commands: 55067 00:11:46.388 Host Write Commands: 53859 00:11:46.388 Controller Busy Time: 0 minutes 00:11:46.388 Power Cycles: 0 00:11:46.388 Power On Hours: 0 hours 00:11:46.388 Unsafe Shutdowns: 0 00:11:46.388 Unrecoverable Media Errors: 0 00:11:46.388 Lifetime Error Log Entries: 0 00:11:46.388 Warning Temperature Time: 0 minutes 00:11:46.388 Critical Temperature Time: 0 minutes 00:11:46.388 00:11:46.388 Number of Queues 00:11:46.388 ================ 00:11:46.388 Number of I/O Submission Queues: 64 00:11:46.388 Number of I/O Completion Queues: 64 00:11:46.388 00:11:46.388 ZNS Specific Controller Data 00:11:46.388 ============================ 00:11:46.388 Zone Append Size Limit: 0 00:11:46.388 00:11:46.388 00:11:46.388 Active Namespaces 00:11:46.388 ================= 00:11:46.388 Namespace ID:1 00:11:46.388 Error Recovery Timeout: Unlimited 00:11:46.388 Command Set Identifier: NVM (00h) 00:11:46.388 Deallocate: Supported 00:11:46.388 Deallocated/Unwritten Error: Supported 00:11:46.388 Deallocated Read Value: All 0x00 00:11:46.388 Deallocate in Write Zeroes: Not Supported 00:11:46.388 Deallocated Guard Field: 0xFFFF 00:11:46.388 Flush: Supported 00:11:46.388 Reservation: Not Supported 00:11:46.388 Namespace Sharing Capabilities: Private 00:11:46.388 Size (in LBAs): 1310720 (5GiB) 00:11:46.388 Capacity (in LBAs): 1310720 (5GiB) 00:11:46.388 Utilization (in LBAs): 1310720 (5GiB) 00:11:46.388 Thin Provisioning: Not Supported 00:11:46.388 Per-NS Atomic Units: No 00:11:46.388 Maximum Single Source Range Length: 128 00:11:46.388 Maximum Copy Length: 128 00:11:46.388 Maximum Source Range Count: 128 00:11:46.388 NGUID/EUI64 Never Reused: No 00:11:46.388 Namespace Write Protected: No 00:11:46.388 Number of LBA Formats: 8 00:11:46.388 Current LBA Format: LBA Format #04 00:11:46.388 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:46.388 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:11:46.388 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:46.388 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:46.388 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:46.388 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:46.388 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:46.388 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:46.388 00:11:46.388 NVM Specific Namespace Data 00:11:46.388 =========================== 00:11:46.388 Logical Block Storage Tag Mask: 0 00:11:46.388 Protection Information Capabilities: 00:11:46.388 16b Guard Protection Information Storage Tag Support: No 00:11:46.388 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:46.388 Storage Tag Check Read Support: No 00:11:46.388 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.388 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.388 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.388 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.389 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.389 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.389 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.389 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.389 18:09:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:46.389 18:09:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:11:46.648 ===================================================== 00:11:46.648 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:46.648 ===================================================== 00:11:46.648 Controller Capabilities/Features 00:11:46.648 ================================ 00:11:46.648 Vendor ID: 1b36 00:11:46.648 Subsystem Vendor ID: 1af4 00:11:46.648 Serial Number: 12342 00:11:46.648 Model Number: QEMU NVMe Ctrl 00:11:46.648 Firmware Version: 8.0.0 00:11:46.648 Recommended Arb Burst: 6 00:11:46.648 IEEE OUI Identifier: 00 54 52 00:11:46.648 Multi-path I/O 00:11:46.648 May have multiple subsystem ports: No 00:11:46.648 May have multiple controllers: No 00:11:46.648 Associated with SR-IOV VF: No 00:11:46.648 Max Data Transfer Size: 524288 00:11:46.648 Max Number of Namespaces: 256 00:11:46.648 Max Number of I/O Queues: 64 00:11:46.648 NVMe Specification Version (VS): 1.4 00:11:46.648 NVMe Specification Version (Identify): 1.4 00:11:46.648 Maximum Queue Entries: 2048 00:11:46.648 Contiguous Queues Required: Yes 00:11:46.648 Arbitration Mechanisms Supported 00:11:46.648 Weighted Round Robin: Not Supported 00:11:46.648 Vendor Specific: Not Supported 00:11:46.648 Reset Timeout: 7500 ms 00:11:46.648 Doorbell Stride: 4 bytes 00:11:46.648 NVM Subsystem Reset: Not Supported 00:11:46.648 Command Sets Supported 00:11:46.648 NVM Command Set: Supported 00:11:46.648 Boot Partition: Not Supported 00:11:46.648 Memory Page Size Minimum: 4096 bytes 00:11:46.648 Memory Page Size Maximum: 65536 bytes 00:11:46.648 Persistent Memory Region: Not Supported 00:11:46.648 Optional Asynchronous Events Supported 00:11:46.648 Namespace Attribute Notices: Supported 00:11:46.648 
Firmware Activation Notices: Not Supported 00:11:46.648 ANA Change Notices: Not Supported 00:11:46.648 PLE Aggregate Log Change Notices: Not Supported 00:11:46.648 LBA Status Info Alert Notices: Not Supported 00:11:46.648 EGE Aggregate Log Change Notices: Not Supported 00:11:46.648 Normal NVM Subsystem Shutdown event: Not Supported 00:11:46.648 Zone Descriptor Change Notices: Not Supported 00:11:46.648 Discovery Log Change Notices: Not Supported 00:11:46.648 Controller Attributes 00:11:46.648 128-bit Host Identifier: Not Supported 00:11:46.648 Non-Operational Permissive Mode: Not Supported 00:11:46.648 NVM Sets: Not Supported 00:11:46.648 Read Recovery Levels: Not Supported 00:11:46.648 Endurance Groups: Not Supported 00:11:46.648 Predictable Latency Mode: Not Supported 00:11:46.648 Traffic Based Keep Alive: Not Supported 00:11:46.648 Namespace Granularity: Not Supported 00:11:46.648 SQ Associations: Not Supported 00:11:46.648 UUID List: Not Supported 00:11:46.648 Multi-Domain Subsystem: Not Supported 00:11:46.648 Fixed Capacity Management: Not Supported 00:11:46.648 Variable Capacity Management: Not Supported 00:11:46.648 Delete Endurance Group: Not Supported 00:11:46.648 Delete NVM Set: Not Supported 00:11:46.648 Extended LBA Formats Supported: Supported 00:11:46.648 Flexible Data Placement Supported: Not Supported 00:11:46.648 00:11:46.648 Controller Memory Buffer Support 00:11:46.649 ================================ 00:11:46.649 Supported: No 00:11:46.649 00:11:46.649 Persistent Memory Region Support 00:11:46.649 ================================ 00:11:46.649 Supported: No 00:11:46.649 00:11:46.649 Admin Command Set Attributes 00:11:46.649 ============================ 00:11:46.649 Security Send/Receive: Not Supported 00:11:46.649 Format NVM: Supported 00:11:46.649 Firmware Activate/Download: Not Supported 00:11:46.649 Namespace Management: Supported 00:11:46.649 Device Self-Test: Not Supported 00:11:46.649 Directives: Supported 00:11:46.649 NVMe-MI: Not Supported 00:11:46.649 Virtualization Management: Not Supported 00:11:46.649 Doorbell Buffer Config: Supported 00:11:46.649 Get LBA Status Capability: Not Supported 00:11:46.649 Command & Feature Lockdown Capability: Not Supported 00:11:46.649 Abort Command Limit: 4 00:11:46.649 Async Event Request Limit: 4 00:11:46.649 Number of Firmware Slots: N/A 00:11:46.649 Firmware Slot 1 Read-Only: N/A 00:11:46.649 Firmware Activation Without Reset: N/A 00:11:46.649 Multiple Update Detection Support: N/A 00:11:46.649 Firmware Update Granularity: No Information Provided 00:11:46.649 Per-Namespace SMART Log: Yes 00:11:46.649 Asymmetric Namespace Access Log Page: Not Supported 00:11:46.649 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:46.649 Command Effects Log Page: Supported 00:11:46.649 Get Log Page Extended Data: Supported 00:11:46.649 Telemetry Log Pages: Not Supported 00:11:46.649 Persistent Event Log Pages: Not Supported 00:11:46.649 Supported Log Pages Log Page: May Support 00:11:46.649 Commands Supported & Effects Log Page: Not Supported 00:11:46.649 Feature Identifiers & Effects Log Page: May Support 00:11:46.649 NVMe-MI Commands & Effects Log Page: May Support 00:11:46.649 Data Area 4 for Telemetry Log: Not Supported 00:11:46.649 Error Log Page Entries Supported: 1 00:11:46.649 Keep Alive: Not Supported 00:11:46.649 00:11:46.649 NVM Command Set Attributes 00:11:46.649 ========================== 00:11:46.649 Submission Queue Entry Size 00:11:46.649 Max: 64 00:11:46.649 Min: 64 00:11:46.649 Completion Queue Entry Size 00:11:46.649 Max: 16
00:11:46.649 Min: 16 00:11:46.649 Number of Namespaces: 256 00:11:46.649 Compare Command: Supported 00:11:46.649 Write Uncorrectable Command: Not Supported 00:11:46.649 Dataset Management Command: Supported 00:11:46.649 Write Zeroes Command: Supported 00:11:46.649 Set Features Save Field: Supported 00:11:46.649 Reservations: Not Supported 00:11:46.649 Timestamp: Supported 00:11:46.649 Copy: Supported 00:11:46.649 Volatile Write Cache: Present 00:11:46.649 Atomic Write Unit (Normal): 1 00:11:46.649 Atomic Write Unit (PFail): 1 00:11:46.649 Atomic Compare & Write Unit: 1 00:11:46.649 Fused Compare & Write: Not Supported 00:11:46.649 Scatter-Gather List 00:11:46.649 SGL Command Set: Supported 00:11:46.649 SGL Keyed: Not Supported 00:11:46.649 SGL Bit Bucket Descriptor: Not Supported 00:11:46.649 SGL Metadata Pointer: Not Supported 00:11:46.649 Oversized SGL: Not Supported 00:11:46.649 SGL Metadata Address: Not Supported 00:11:46.649 SGL Offset: Not Supported 00:11:46.649 Transport SGL Data Block: Not Supported 00:11:46.649 Replay Protected Memory Block: Not Supported 00:11:46.649 00:11:46.649 Firmware Slot Information 00:11:46.649 ========================= 00:11:46.649 Active slot: 1 00:11:46.649 Slot 1 Firmware Revision: 1.0 00:11:46.649 00:11:46.649 00:11:46.649 Commands Supported and Effects 00:11:46.649 ============================== 00:11:46.649 Admin Commands 00:11:46.649 -------------- 00:11:46.649 Delete I/O Submission Queue (00h): Supported 00:11:46.649 Create I/O Submission Queue (01h): Supported 00:11:46.649 Get Log Page (02h): Supported 00:11:46.649 Delete I/O Completion Queue (04h): Supported 00:11:46.649 Create I/O Completion Queue (05h): Supported 00:11:46.649 Identify (06h): Supported 00:11:46.649 Abort (08h): Supported 00:11:46.649 Set Features (09h): Supported 00:11:46.649 Get Features (0Ah): Supported 00:11:46.649 Asynchronous Event Request (0Ch): Supported 00:11:46.649 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:46.649 Directive Send (19h): Supported 00:11:46.649 Directive Receive (1Ah): Supported 00:11:46.649 Virtualization Management (1Ch): Supported 00:11:46.649 Doorbell Buffer Config (7Ch): Supported 00:11:46.649 Format NVM (80h): Supported LBA-Change 00:11:46.649 I/O Commands 00:11:46.649 ------------ 00:11:46.649 Flush (00h): Supported LBA-Change 00:11:46.649 Write (01h): Supported LBA-Change 00:11:46.649 Read (02h): Supported 00:11:46.649 Compare (05h): Supported 00:11:46.649 Write Zeroes (08h): Supported LBA-Change 00:11:46.649 Dataset Management (09h): Supported LBA-Change 00:11:46.649 Unknown (0Ch): Supported 00:11:46.649 Unknown (12h): Supported 00:11:46.649 Copy (19h): Supported LBA-Change 00:11:46.649 Unknown (1Dh): Supported LBA-Change 00:11:46.649 00:11:46.649 Error Log 00:11:46.649 ========= 00:11:46.649 00:11:46.649 Arbitration 00:11:46.649 =========== 00:11:46.649 Arbitration Burst: no limit 00:11:46.649 00:11:46.649 Power Management 00:11:46.649 ================ 00:11:46.649 Number of Power States: 1 00:11:46.649 Current Power State: Power State #0 00:11:46.649 Power State #0: 00:11:46.649 Max Power: 25.00 W 00:11:46.649 Non-Operational State: Operational 00:11:46.649 Entry Latency: 16 microseconds 00:11:46.649 Exit Latency: 4 microseconds 00:11:46.649 Relative Read Throughput: 0 00:11:46.649 Relative Read Latency: 0 00:11:46.649 Relative Write Throughput: 0 00:11:46.649 Relative Write Latency: 0 00:11:46.649 Idle Power: Not Reported 00:11:46.649 Active Power: Not Reported 00:11:46.649 Non-Operational Permissive Mode: Not Supported 
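The Active Namespaces listings that follow can be cross-checked by hand: each namespace reports its size in LBAs next to the current LBA format, and the byte capacity is simply the product of the two. A minimal sketch of that arithmetic in bash (the 1048576-LBA count and the 4096-byte data size of LBA Format #04 are taken from the dumps in this log; the variable names are illustrative only):

    # capacity = LBA count x data size of the current LBA format (#04: 4096 B data, 0 B metadata)
    lba_count=1048576   # "Size (in LBAs)" reported for namespaces 1-3 of controller 12342
    lba_size=4096       # data size of LBA Format #04
    echo "$((lba_count * lba_size / 1024 ** 3)) GiB"   # prints: 4 GiB
    echo "$((262144 * 4096 / 1024 ** 3)) GiB"          # FDP namespace earlier in the log: 1 GiB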
00:11:46.649 00:11:46.649 Health Information 00:11:46.649 ================== 00:11:46.649 Critical Warnings: 00:11:46.649 Available Spare Space: OK 00:11:46.649 Temperature: OK 00:11:46.649 Device Reliability: OK 00:11:46.649 Read Only: No 00:11:46.649 Volatile Memory Backup: OK 00:11:46.649 Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.649 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:46.649 Available Spare: 0% 00:11:46.649 Available Spare Threshold: 0% 00:11:46.649 Life Percentage Used: 0% 00:11:46.649 Data Units Read: 2547 00:11:46.649 Data Units Written: 2334 00:11:46.649 Host Read Commands: 113920 00:11:46.649 Host Write Commands: 112190 00:11:46.649 Controller Busy Time: 0 minutes 00:11:46.649 Power Cycles: 0 00:11:46.649 Power On Hours: 0 hours 00:11:46.649 Unsafe Shutdowns: 0 00:11:46.649 Unrecoverable Media Errors: 0 00:11:46.649 Lifetime Error Log Entries: 0 00:11:46.649 Warning Temperature Time: 0 minutes 00:11:46.649 Critical Temperature Time: 0 minutes 00:11:46.649 00:11:46.649 Number of Queues 00:11:46.649 ================ 00:11:46.649 Number of I/O Submission Queues: 64 00:11:46.649 Number of I/O Completion Queues: 64 00:11:46.649 00:11:46.649 ZNS Specific Controller Data 00:11:46.649 ============================ 00:11:46.649 Zone Append Size Limit: 0 00:11:46.649 00:11:46.649 00:11:46.649 Active Namespaces 00:11:46.649 ================= 00:11:46.649 Namespace ID:1 00:11:46.649 Error Recovery Timeout: Unlimited 00:11:46.649 Command Set Identifier: NVM (00h) 00:11:46.649 Deallocate: Supported 00:11:46.649 Deallocated/Unwritten Error: Supported 00:11:46.649 Deallocated Read Value: All 0x00 00:11:46.649 Deallocate in Write Zeroes: Not Supported 00:11:46.649 Deallocated Guard Field: 0xFFFF 00:11:46.649 Flush: Supported 00:11:46.649 Reservation: Not Supported 00:11:46.649 Namespace Sharing Capabilities: Private 00:11:46.649 Size (in LBAs): 1048576 (4GiB) 00:11:46.649 Capacity (in LBAs): 1048576 (4GiB) 00:11:46.649 Utilization (in LBAs): 1048576 (4GiB) 00:11:46.649 Thin Provisioning: Not Supported 00:11:46.649 Per-NS Atomic Units: No 00:11:46.649 Maximum Single Source Range Length: 128 00:11:46.649 Maximum Copy Length: 128 00:11:46.649 Maximum Source Range Count: 128 00:11:46.649 NGUID/EUI64 Never Reused: No 00:11:46.649 Namespace Write Protected: No 00:11:46.649 Number of LBA Formats: 8 00:11:46.649 Current LBA Format: LBA Format #04 00:11:46.649 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:46.649 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:46.649 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:46.649 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:46.649 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:46.649 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:46.649 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:46.649 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:46.649 00:11:46.649 NVM Specific Namespace Data 00:11:46.649 =========================== 00:11:46.650 Logical Block Storage Tag Mask: 0 00:11:46.650 Protection Information Capabilities: 00:11:46.650 16b Guard Protection Information Storage Tag Support: No 00:11:46.650 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:46.650 Storage Tag Check Read Support: No 00:11:46.650 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Namespace ID:2 00:11:46.650 Error Recovery Timeout: Unlimited 00:11:46.650 Command Set Identifier: NVM (00h) 00:11:46.650 Deallocate: Supported 00:11:46.650 Deallocated/Unwritten Error: Supported 00:11:46.650 Deallocated Read Value: All 0x00 00:11:46.650 Deallocate in Write Zeroes: Not Supported 00:11:46.650 Deallocated Guard Field: 0xFFFF 00:11:46.650 Flush: Supported 00:11:46.650 Reservation: Not Supported 00:11:46.650 Namespace Sharing Capabilities: Private 00:11:46.650 Size (in LBAs): 1048576 (4GiB) 00:11:46.650 Capacity (in LBAs): 1048576 (4GiB) 00:11:46.650 Utilization (in LBAs): 1048576 (4GiB) 00:11:46.650 Thin Provisioning: Not Supported 00:11:46.650 Per-NS Atomic Units: No 00:11:46.650 Maximum Single Source Range Length: 128 00:11:46.650 Maximum Copy Length: 128 00:11:46.650 Maximum Source Range Count: 128 00:11:46.650 NGUID/EUI64 Never Reused: No 00:11:46.650 Namespace Write Protected: No 00:11:46.650 Number of LBA Formats: 8 00:11:46.650 Current LBA Format: LBA Format #04 00:11:46.650 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:46.650 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:46.650 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:46.650 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:46.650 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:46.650 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:46.650 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:46.650 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:46.650 00:11:46.650 NVM Specific Namespace Data 00:11:46.650 =========================== 00:11:46.650 Logical Block Storage Tag Mask: 0 00:11:46.650 Protection Information Capabilities: 00:11:46.650 16b Guard Protection Information Storage Tag Support: No 00:11:46.650 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:46.650 Storage Tag Check Read Support: No 00:11:46.650 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Namespace ID:3 00:11:46.650 Error Recovery Timeout: Unlimited 00:11:46.650 Command Set Identifier: NVM (00h) 00:11:46.650 Deallocate: Supported 00:11:46.650 Deallocated/Unwritten Error: Supported 00:11:46.650 Deallocated Read 
Value: All 0x00 00:11:46.650 Deallocate in Write Zeroes: Not Supported 00:11:46.650 Deallocated Guard Field: 0xFFFF 00:11:46.650 Flush: Supported 00:11:46.650 Reservation: Not Supported 00:11:46.650 Namespace Sharing Capabilities: Private 00:11:46.650 Size (in LBAs): 1048576 (4GiB) 00:11:46.650 Capacity (in LBAs): 1048576 (4GiB) 00:11:46.650 Utilization (in LBAs): 1048576 (4GiB) 00:11:46.650 Thin Provisioning: Not Supported 00:11:46.650 Per-NS Atomic Units: No 00:11:46.650 Maximum Single Source Range Length: 128 00:11:46.650 Maximum Copy Length: 128 00:11:46.650 Maximum Source Range Count: 128 00:11:46.650 NGUID/EUI64 Never Reused: No 00:11:46.650 Namespace Write Protected: No 00:11:46.650 Number of LBA Formats: 8 00:11:46.650 Current LBA Format: LBA Format #04 00:11:46.650 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:46.650 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:46.650 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:46.650 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:46.650 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:46.650 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:46.650 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:46.650 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:46.650 00:11:46.650 NVM Specific Namespace Data 00:11:46.650 =========================== 00:11:46.650 Logical Block Storage Tag Mask: 0 00:11:46.650 Protection Information Capabilities: 00:11:46.650 16b Guard Protection Information Storage Tag Support: No 00:11:46.650 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:46.650 Storage Tag Check Read Support: No 00:11:46.650 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.650 18:09:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:46.650 18:09:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:11:46.911 ===================================================== 00:11:46.911 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:46.911 ===================================================== 00:11:46.911 Controller Capabilities/Features 00:11:46.911 ================================ 00:11:46.911 Vendor ID: 1b36 00:11:46.911 Subsystem Vendor ID: 1af4 00:11:46.911 Serial Number: 12343 00:11:46.911 Model Number: QEMU NVMe Ctrl 00:11:46.911 Firmware Version: 8.0.0 00:11:46.911 Recommended Arb Burst: 6 00:11:46.911 IEEE OUI Identifier: 00 54 52 00:11:46.911 Multi-path I/O 00:11:46.911 May have multiple subsystem ports: No 00:11:46.911 May have multiple controllers: Yes 00:11:46.911 Associated with SR-IOV VF: No 00:11:46.911 Max Data Transfer Size: 524288 00:11:46.911 Max Number of Namespaces: 
256 00:11:46.911 Max Number of I/O Queues: 64 00:11:46.911 NVMe Specification Version (VS): 1.4 00:11:46.911 NVMe Specification Version (Identify): 1.4 00:11:46.911 Maximum Queue Entries: 2048 00:11:46.911 Contiguous Queues Required: Yes 00:11:46.911 Arbitration Mechanisms Supported 00:11:46.911 Weighted Round Robin: Not Supported 00:11:46.911 Vendor Specific: Not Supported 00:11:46.911 Reset Timeout: 7500 ms 00:11:46.911 Doorbell Stride: 4 bytes 00:11:46.911 NVM Subsystem Reset: Not Supported 00:11:46.911 Command Sets Supported 00:11:46.911 NVM Command Set: Supported 00:11:46.911 Boot Partition: Not Supported 00:11:46.911 Memory Page Size Minimum: 4096 bytes 00:11:46.911 Memory Page Size Maximum: 65536 bytes 00:11:46.911 Persistent Memory Region: Not Supported 00:11:46.911 Optional Asynchronous Events Supported 00:11:46.911 Namespace Attribute Notices: Supported 00:11:46.911 Firmware Activation Notices: Not Supported 00:11:46.911 ANA Change Notices: Not Supported 00:11:46.911 PLE Aggregate Log Change Notices: Not Supported 00:11:46.911 LBA Status Info Alert Notices: Not Supported 00:11:46.911 EGE Aggregate Log Change Notices: Not Supported 00:11:46.911 Normal NVM Subsystem Shutdown event: Not Supported 00:11:46.911 Zone Descriptor Change Notices: Not Supported 00:11:46.911 Discovery Log Change Notices: Not Supported 00:11:46.911 Controller Attributes 00:11:46.911 128-bit Host Identifier: Not Supported 00:11:46.911 Non-Operational Permissive Mode: Not Supported 00:11:46.911 NVM Sets: Not Supported 00:11:46.911 Read Recovery Levels: Not Supported 00:11:46.911 Endurance Groups: Supported 00:11:46.911 Predictable Latency Mode: Not Supported 00:11:46.911 Traffic Based Keep Alive: Not Supported 00:11:46.911 Namespace Granularity: Not Supported 00:11:46.911 SQ Associations: Not Supported 00:11:46.911 UUID List: Not Supported 00:11:46.911 Multi-Domain Subsystem: Not Supported 00:11:46.911 Fixed Capacity Management: Not Supported 00:11:46.911 Variable Capacity Management: Not Supported 00:11:46.911 Delete Endurance Group: Not Supported 00:11:46.911 Delete NVM Set: Not Supported 00:11:46.911 Extended LBA Formats Supported: Supported 00:11:46.911 Flexible Data Placement Supported: Supported 00:11:46.911 00:11:46.911 Controller Memory Buffer Support 00:11:46.911 ================================ 00:11:46.911 Supported: No 00:11:46.911 00:11:46.911 Persistent Memory Region Support 00:11:46.911 ================================ 00:11:46.911 Supported: No 00:11:46.911 00:11:46.911 Admin Command Set Attributes 00:11:46.911 ============================ 00:11:46.911 Security Send/Receive: Not Supported 00:11:46.911 Format NVM: Supported 00:11:46.911 Firmware Activate/Download: Not Supported 00:11:46.911 Namespace Management: Supported 00:11:46.911 Device Self-Test: Not Supported 00:11:46.911 Directives: Supported 00:11:46.911 NVMe-MI: Not Supported 00:11:46.911 Virtualization Management: Not Supported 00:11:46.911 Doorbell Buffer Config: Supported 00:11:46.911 Get LBA Status Capability: Not Supported 00:11:46.911 Command & Feature Lockdown Capability: Not Supported 00:11:46.911 Abort Command Limit: 4 00:11:46.911 Async Event Request Limit: 4 00:11:46.911 Number of Firmware Slots: N/A 00:11:46.911 Firmware Slot 1 Read-Only: N/A 00:11:46.911 Firmware Activation Without Reset: N/A 00:11:46.911 Multiple Update Detection Support: N/A 00:11:46.911 Firmware Update Granularity: No Information Provided 00:11:46.911 Per-Namespace SMART Log: Yes 00:11:46.911 Asymmetric Namespace Access Log Page: Not Supported
00:11:46.911 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:46.911 Command Effects Log Page: Supported 00:11:46.911 Get Log Page Extended Data: Supported 00:11:46.911 Telemetry Log Pages: Not Supported 00:11:46.911 Persistent Event Log Pages: Not Supported 00:11:46.911 Supported Log Pages Log Page: May Support 00:11:46.911 Commands Supported & Effects Log Page: Not Supported 00:11:46.911 Feature Identifiers & Effects Log Page: May Support 00:11:46.911 NVMe-MI Commands & Effects Log Page: May Support 00:11:46.911 Data Area 4 for Telemetry Log: Not Supported 00:11:46.911 Error Log Page Entries Supported: 1 00:11:46.911 Keep Alive: Not Supported 00:11:46.911 00:11:46.911 NVM Command Set Attributes 00:11:46.911 ========================== 00:11:46.911 Submission Queue Entry Size 00:11:46.911 Max: 64 00:11:46.911 Min: 64 00:11:46.911 Completion Queue Entry Size 00:11:46.911 Max: 16 00:11:46.911 Min: 16 00:11:46.911 Number of Namespaces: 256 00:11:46.911 Compare Command: Supported 00:11:46.911 Write Uncorrectable Command: Not Supported 00:11:46.911 Dataset Management Command: Supported 00:11:46.911 Write Zeroes Command: Supported 00:11:46.911 Set Features Save Field: Supported 00:11:46.911 Reservations: Not Supported 00:11:46.911 Timestamp: Supported 00:11:46.911 Copy: Supported 00:11:46.911 Volatile Write Cache: Present 00:11:46.911 Atomic Write Unit (Normal): 1 00:11:46.911 Atomic Write Unit (PFail): 1 00:11:46.911 Atomic Compare & Write Unit: 1 00:11:46.911 Fused Compare & Write: Not Supported 00:11:46.911 Scatter-Gather List 00:11:46.911 SGL Command Set: Supported 00:11:46.911 SGL Keyed: Not Supported 00:11:46.911 SGL Bit Bucket Descriptor: Not Supported 00:11:46.911 SGL Metadata Pointer: Not Supported 00:11:46.911 Oversized SGL: Not Supported 00:11:46.911 SGL Metadata Address: Not Supported 00:11:46.911 SGL Offset: Not Supported 00:11:46.911 Transport SGL Data Block: Not Supported 00:11:46.911 Replay Protected Memory Block: Not Supported 00:11:46.911 00:11:46.911 Firmware Slot Information 00:11:46.911 ========================= 00:11:46.911 Active slot: 1 00:11:46.911 Slot 1 Firmware Revision: 1.0 00:11:46.911 00:11:46.911 00:11:46.911 Commands Supported and Effects 00:11:46.911 ============================== 00:11:46.911 Admin Commands 00:11:46.911 -------------- 00:11:46.911 Delete I/O Submission Queue (00h): Supported 00:11:46.911 Create I/O Submission Queue (01h): Supported 00:11:46.912 Get Log Page (02h): Supported 00:11:46.912 Delete I/O Completion Queue (04h): Supported 00:11:46.912 Create I/O Completion Queue (05h): Supported 00:11:46.912 Identify (06h): Supported 00:11:46.912 Abort (08h): Supported 00:11:46.912 Set Features (09h): Supported 00:11:46.912 Get Features (0Ah): Supported 00:11:46.912 Asynchronous Event Request (0Ch): Supported 00:11:46.912 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:46.912 Directive Send (19h): Supported 00:11:46.912 Directive Receive (1Ah): Supported 00:11:46.912 Virtualization Management (1Ch): Supported 00:11:46.912 Doorbell Buffer Config (7Ch): Supported 00:11:46.912 Format NVM (80h): Supported LBA-Change 00:11:46.912 I/O Commands 00:11:46.912 ------------ 00:11:46.912 Flush (00h): Supported LBA-Change 00:11:46.912 Write (01h): Supported LBA-Change 00:11:46.912 Read (02h): Supported 00:11:46.912 Compare (05h): Supported 00:11:46.912 Write Zeroes (08h): Supported LBA-Change 00:11:46.912 Dataset Management (09h): Supported LBA-Change 00:11:46.912 Unknown (0Ch): Supported 00:11:46.912 Unknown (12h): Supported 00:11:46.912 Copy
(19h): Supported LBA-Change 00:11:46.912 Unknown (1Dh): Supported LBA-Change 00:11:46.912 00:11:46.912 Error Log 00:11:46.912 ========= 00:11:46.912 00:11:46.912 Arbitration 00:11:46.912 =========== 00:11:46.912 Arbitration Burst: no limit 00:11:46.912 00:11:46.912 Power Management 00:11:46.912 ================ 00:11:46.912 Number of Power States: 1 00:11:46.912 Current Power State: Power State #0 00:11:46.912 Power State #0: 00:11:46.912 Max Power: 25.00 W 00:11:46.912 Non-Operational State: Operational 00:11:46.912 Entry Latency: 16 microseconds 00:11:46.912 Exit Latency: 4 microseconds 00:11:46.912 Relative Read Throughput: 0 00:11:46.912 Relative Read Latency: 0 00:11:46.912 Relative Write Throughput: 0 00:11:46.912 Relative Write Latency: 0 00:11:46.912 Idle Power: Not Reported 00:11:46.912 Active Power: Not Reported 00:11:46.912 Non-Operational Permissive Mode: Not Supported 00:11:46.912 00:11:46.912 Health Information 00:11:46.912 ================== 00:11:46.912 Critical Warnings: 00:11:46.912 Available Spare Space: OK 00:11:46.912 Temperature: OK 00:11:46.912 Device Reliability: OK 00:11:46.912 Read Only: No 00:11:46.912 Volatile Memory Backup: OK 00:11:46.912 Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.912 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:46.912 Available Spare: 0% 00:11:46.912 Available Spare Threshold: 0% 00:11:46.912 Life Percentage Used: 0% 00:11:46.912 Data Units Read: 1016 00:11:46.912 Data Units Written: 945 00:11:46.912 Host Read Commands: 39338 00:11:46.912 Host Write Commands: 38761 00:11:46.912 Controller Busy Time: 0 minutes 00:11:46.912 Power Cycles: 0 00:11:46.912 Power On Hours: 0 hours 00:11:46.912 Unsafe Shutdowns: 0 00:11:46.912 Unrecoverable Media Errors: 0 00:11:46.912 Lifetime Error Log Entries: 0 00:11:46.912 Warning Temperature Time: 0 minutes 00:11:46.912 Critical Temperature Time: 0 minutes 00:11:46.912 00:11:46.912 Number of Queues 00:11:46.912 ================ 00:11:46.912 Number of I/O Submission Queues: 64 00:11:46.912 Number of I/O Completion Queues: 64 00:11:46.912 00:11:46.912 ZNS Specific Controller Data 00:11:46.912 ============================ 00:11:46.912 Zone Append Size Limit: 0 00:11:46.912 00:11:46.912 00:11:46.912 Active Namespaces 00:11:46.912 ================= 00:11:46.912 Namespace ID:1 00:11:46.912 Error Recovery Timeout: Unlimited 00:11:46.912 Command Set Identifier: NVM (00h) 00:11:46.912 Deallocate: Supported 00:11:46.912 Deallocated/Unwritten Error: Supported 00:11:46.912 Deallocated Read Value: All 0x00 00:11:46.912 Deallocate in Write Zeroes: Not Supported 00:11:46.912 Deallocated Guard Field: 0xFFFF 00:11:46.912 Flush: Supported 00:11:46.912 Reservation: Not Supported 00:11:46.912 Namespace Sharing Capabilities: Multiple Controllers 00:11:46.912 Size (in LBAs): 262144 (1GiB) 00:11:46.912 Capacity (in LBAs): 262144 (1GiB) 00:11:46.912 Utilization (in LBAs): 262144 (1GiB) 00:11:46.912 Thin Provisioning: Not Supported 00:11:46.912 Per-NS Atomic Units: No 00:11:46.912 Maximum Single Source Range Length: 128 00:11:46.912 Maximum Copy Length: 128 00:11:46.912 Maximum Source Range Count: 128 00:11:46.912 NGUID/EUI64 Never Reused: No 00:11:46.912 Namespace Write Protected: No 00:11:46.912 Endurance group ID: 1 00:11:46.912 Number of LBA Formats: 8 00:11:46.912 Current LBA Format: LBA Format #04 00:11:46.912 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:46.912 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:46.912 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:46.912 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:11:46.912 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:46.912 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:46.912 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:46.912 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:46.912 00:11:46.912 Get Feature FDP: 00:11:46.912 ================ 00:11:46.912 Enabled: Yes 00:11:46.912 FDP configuration index: 0 00:11:46.912 00:11:46.912 FDP configurations log page 00:11:46.912 =========================== 00:11:46.912 Number of FDP configurations: 1 00:11:46.912 Version: 0 00:11:46.912 Size: 112 00:11:46.912 FDP Configuration Descriptor: 0 00:11:46.912 Descriptor Size: 96 00:11:46.912 Reclaim Group Identifier format: 2 00:11:46.912 FDP Volatile Write Cache: Not Present 00:11:46.912 FDP Configuration: Valid 00:11:46.912 Vendor Specific Size: 0 00:11:46.912 Number of Reclaim Groups: 2 00:11:46.912 Number of Reclaim Unit Handles: 8 00:11:46.912 Max Placement Identifiers: 128 00:11:46.912 Number of Namespaces Supported: 256 00:11:46.912 Reclaim Unit Nominal Size: 6000000 bytes 00:11:46.912 Estimated Reclaim Unit Time Limit: Not Reported 00:11:46.912 RUH Desc #000: RUH Type: Initially Isolated 00:11:46.912 RUH Desc #001: RUH Type: Initially Isolated 00:11:46.912 RUH Desc #002: RUH Type: Initially Isolated 00:11:46.912 RUH Desc #003: RUH Type: Initially Isolated 00:11:46.912 RUH Desc #004: RUH Type: Initially Isolated 00:11:46.912 RUH Desc #005: RUH Type: Initially Isolated 00:11:46.912 RUH Desc #006: RUH Type: Initially Isolated 00:11:46.912 RUH Desc #007: RUH Type: Initially Isolated 00:11:46.912 00:11:46.912 FDP reclaim unit handle usage log page 00:11:46.912 ====================================== 00:11:46.912 Number of Reclaim Unit Handles: 8 00:11:46.912 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:46.912 RUH Usage Desc #001: RUH Attributes: Unused 00:11:46.912 RUH Usage Desc #002: RUH Attributes: Unused 00:11:46.912 RUH Usage Desc #003: RUH Attributes: Unused 00:11:46.912 RUH Usage Desc #004: RUH Attributes: Unused 00:11:46.912 RUH Usage Desc #005: RUH Attributes: Unused 00:11:46.912 RUH Usage Desc #006: RUH Attributes: Unused 00:11:46.912 RUH Usage Desc #007: RUH Attributes: Unused 00:11:46.912 00:11:46.912 FDP statistics log page 00:11:46.912 ======================= 00:11:46.912 Host bytes with metadata written: 586850304 00:11:46.912 Media bytes with metadata written: 589090816 00:11:46.912 Media bytes erased: 0 00:11:46.912 00:11:46.912 FDP events log page 00:11:46.912 =================== 00:11:46.912 Number of FDP events: 0 00:11:46.912 00:11:46.912 NVM Specific Namespace Data 00:11:46.912 =========================== 00:11:46.912 Logical Block Storage Tag Mask: 0 00:11:46.912 Protection Information Capabilities: 00:11:46.912 16b Guard Protection Information Storage Tag Support: No 00:11:46.912 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:46.912 Storage Tag Check Read Support: No 00:11:46.912 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.912 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.912 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.912 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.912 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.912 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.912 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.912 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:46.912 00:11:46.912 real 0m1.765s 00:11:46.912 user 0m0.670s 00:11:46.912 sys 0m0.864s 00:11:46.912 18:09:57 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.912 18:09:57 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:11:46.912 ************************************ 00:11:46.912 END TEST nvme_identify 00:11:46.912 ************************************ 00:11:47.171 18:09:57 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:47.171 18:09:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:47.171 18:09:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.171 18:09:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:47.171 ************************************ 00:11:47.171 START TEST nvme_perf 00:11:47.171 ************************************ 00:11:47.171 18:09:57 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:11:47.171 18:09:57 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:48.550 Initializing NVMe Controllers 00:11:48.550 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:48.550 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:48.550 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:48.550 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:48.550 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:48.550 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:48.550 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:48.550 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:48.550 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:48.550 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:48.550 Initialization complete. Launching workers. 
00:11:48.550 ======================================================== 00:11:48.550 Latency(us) 00:11:48.550 Device Information : IOPS MiB/s Average min max 00:11:48.550 PCIE (0000:00:10.0) NSID 1 from core 0: 13479.71 157.97 9515.90 7886.83 51374.35 00:11:48.550 PCIE (0000:00:11.0) NSID 1 from core 0: 13479.71 157.97 9501.14 7947.53 49153.80 00:11:48.550 PCIE (0000:00:13.0) NSID 1 from core 0: 13479.71 157.97 9484.77 7946.04 47644.94 00:11:48.550 PCIE (0000:00:12.0) NSID 1 from core 0: 13479.71 157.97 9467.74 7973.33 45582.13 00:11:48.550 PCIE (0000:00:12.0) NSID 2 from core 0: 13479.71 157.97 9450.74 7961.05 43532.92 00:11:48.550 PCIE (0000:00:12.0) NSID 3 from core 0: 13543.59 158.71 9389.48 8013.76 36685.79 00:11:48.550 ======================================================== 00:11:48.550 Total : 80942.14 948.54 9468.23 7886.83 51374.35 00:11:48.550 00:11:48.550 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:48.550 ================================================================================= 00:11:48.550 1.00000% : 8053.822us 00:11:48.550 10.00000% : 8264.379us 00:11:48.550 25.00000% : 8527.576us 00:11:48.550 50.00000% : 8843.412us 00:11:48.550 75.00000% : 9211.888us 00:11:48.550 90.00000% : 10475.232us 00:11:48.550 95.00000% : 12738.724us 00:11:48.550 98.00000% : 15265.414us 00:11:48.550 99.00000% : 16318.201us 00:11:48.550 99.50000% : 45059.290us 00:11:48.550 99.90000% : 50954.898us 00:11:48.550 99.99000% : 51376.013us 00:11:48.550 99.99900% : 51376.013us 00:11:48.550 99.99990% : 51376.013us 00:11:48.550 99.99999% : 51376.013us 00:11:48.550 00:11:48.550 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:48.550 ================================================================================= 00:11:48.550 1.00000% : 8159.100us 00:11:48.550 10.00000% : 8317.018us 00:11:48.550 25.00000% : 8527.576us 00:11:48.550 50.00000% : 8790.773us 00:11:48.550 75.00000% : 9159.248us 00:11:48.550 90.00000% : 10422.593us 00:11:48.550 95.00000% : 12633.446us 00:11:48.550 98.00000% : 15370.692us 00:11:48.550 99.00000% : 16528.758us 00:11:48.550 99.50000% : 43164.273us 00:11:48.550 99.90000% : 48849.324us 00:11:48.550 99.99000% : 49270.439us 00:11:48.550 99.99900% : 49270.439us 00:11:48.550 99.99990% : 49270.439us 00:11:48.550 99.99999% : 49270.439us 00:11:48.550 00:11:48.550 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:48.550 ================================================================================= 00:11:48.551 1.00000% : 8159.100us 00:11:48.551 10.00000% : 8317.018us 00:11:48.551 25.00000% : 8527.576us 00:11:48.551 50.00000% : 8790.773us 00:11:48.551 75.00000% : 9159.248us 00:11:48.551 90.00000% : 10422.593us 00:11:48.551 95.00000% : 12633.446us 00:11:48.551 98.00000% : 15370.692us 00:11:48.551 99.00000% : 16634.037us 00:11:48.551 99.50000% : 41690.371us 00:11:48.551 99.90000% : 47375.422us 00:11:48.551 99.99000% : 47796.537us 00:11:48.551 99.99900% : 47796.537us 00:11:48.551 99.99990% : 47796.537us 00:11:48.551 99.99999% : 47796.537us 00:11:48.551 00:11:48.551 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:48.551 ================================================================================= 00:11:48.551 1.00000% : 8159.100us 00:11:48.551 10.00000% : 8317.018us 00:11:48.551 25.00000% : 8527.576us 00:11:48.551 50.00000% : 8790.773us 00:11:48.551 75.00000% : 9159.248us 00:11:48.551 90.00000% : 10369.953us 00:11:48.551 95.00000% : 12528.167us 00:11:48.551 98.00000% : 15686.529us 00:11:48.551 
99.00000% : 16634.037us 00:11:48.551 99.50000% : 39584.797us 00:11:48.551 99.90000% : 45269.847us 00:11:48.551 99.99000% : 45690.962us 00:11:48.551 99.99900% : 45690.962us 00:11:48.551 99.99990% : 45690.962us 00:11:48.551 99.99999% : 45690.962us 00:11:48.551 00:11:48.551 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:48.551 ================================================================================= 00:11:48.551 1.00000% : 8106.461us 00:11:48.551 10.00000% : 8317.018us 00:11:48.551 25.00000% : 8527.576us 00:11:48.551 50.00000% : 8843.412us 00:11:48.551 75.00000% : 9159.248us 00:11:48.551 90.00000% : 10369.953us 00:11:48.551 95.00000% : 12633.446us 00:11:48.551 98.00000% : 15475.971us 00:11:48.551 99.00000% : 16949.873us 00:11:48.551 99.50000% : 37479.222us 00:11:48.551 99.90000% : 43164.273us 00:11:48.551 99.99000% : 43585.388us 00:11:48.551 99.99900% : 43585.388us 00:11:48.551 99.99990% : 43585.388us 00:11:48.551 99.99999% : 43585.388us 00:11:48.551 00:11:48.551 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:48.551 ================================================================================= 00:11:48.551 1.00000% : 8159.100us 00:11:48.551 10.00000% : 8317.018us 00:11:48.551 25.00000% : 8527.576us 00:11:48.551 50.00000% : 8843.412us 00:11:48.551 75.00000% : 9211.888us 00:11:48.551 90.00000% : 10422.593us 00:11:48.551 95.00000% : 12791.364us 00:11:48.551 98.00000% : 15370.692us 00:11:48.551 99.00000% : 16423.480us 00:11:48.551 99.50000% : 30741.385us 00:11:48.551 99.90000% : 36426.435us 00:11:48.551 99.99000% : 36847.550us 00:11:48.551 99.99900% : 36847.550us 00:11:48.551 99.99990% : 36847.550us 00:11:48.551 99.99999% : 36847.550us 00:11:48.551 00:11:48.551 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:48.551 ============================================================================== 00:11:48.551 Range in us Cumulative IO count 00:11:48.551 7843.264 - 7895.904: 0.0074% ( 1) 00:11:48.551 7895.904 - 7948.543: 0.1481% ( 19) 00:11:48.551 7948.543 - 8001.182: 0.4591% ( 42) 00:11:48.551 8001.182 - 8053.822: 1.2737% ( 110) 00:11:48.551 8053.822 - 8106.461: 2.7695% ( 202) 00:11:48.551 8106.461 - 8159.100: 4.9171% ( 290) 00:11:48.551 8159.100 - 8211.740: 7.5459% ( 355) 00:11:48.551 8211.740 - 8264.379: 10.3377% ( 377) 00:11:48.551 8264.379 - 8317.018: 13.4775% ( 424) 00:11:48.551 8317.018 - 8369.658: 16.6469% ( 428) 00:11:48.551 8369.658 - 8422.297: 20.1644% ( 475) 00:11:48.551 8422.297 - 8474.937: 23.9262% ( 508) 00:11:48.551 8474.937 - 8527.576: 27.6437% ( 502) 00:11:48.551 8527.576 - 8580.215: 31.7610% ( 556) 00:11:48.551 8580.215 - 8632.855: 35.7894% ( 544) 00:11:48.551 8632.855 - 8685.494: 39.8919% ( 554) 00:11:48.551 8685.494 - 8738.133: 44.1129% ( 570) 00:11:48.551 8738.133 - 8790.773: 48.2968% ( 565) 00:11:48.551 8790.773 - 8843.412: 52.4141% ( 556) 00:11:48.551 8843.412 - 8896.051: 56.5980% ( 565) 00:11:48.551 8896.051 - 8948.691: 60.7079% ( 555) 00:11:48.551 8948.691 - 9001.330: 64.6623% ( 534) 00:11:48.551 9001.330 - 9053.969: 68.2168% ( 480) 00:11:48.551 9053.969 - 9106.609: 71.0678% ( 385) 00:11:48.551 9106.609 - 9159.248: 73.4967% ( 328) 00:11:48.551 9159.248 - 9211.888: 75.3777% ( 254) 00:11:48.551 9211.888 - 9264.527: 77.0735% ( 229) 00:11:48.551 9264.527 - 9317.166: 78.5693% ( 202) 00:11:48.551 9317.166 - 9369.806: 79.7467% ( 159) 00:11:48.551 9369.806 - 9422.445: 80.8871% ( 154) 00:11:48.551 9422.445 - 9475.084: 81.9017% ( 137) 00:11:48.551 9475.084 - 9527.724: 82.7607% ( 116) 00:11:48.551 9527.724 - 
9580.363: 83.4493% ( 93) 00:11:48.551 9580.363 - 9633.002: 84.0714% ( 84) 00:11:48.551 9633.002 - 9685.642: 84.6416% ( 77) 00:11:48.551 9685.642 - 9738.281: 85.1674% ( 71) 00:11:48.551 9738.281 - 9790.920: 85.7227% ( 75) 00:11:48.551 9790.920 - 9843.560: 86.1745% ( 61) 00:11:48.551 9843.560 - 9896.199: 86.6484% ( 64) 00:11:48.551 9896.199 - 9948.839: 87.0335% ( 52) 00:11:48.551 9948.839 - 10001.478: 87.4926% ( 62) 00:11:48.551 10001.478 - 10054.117: 87.8703% ( 51) 00:11:48.551 10054.117 - 10106.757: 88.2035% ( 45) 00:11:48.551 10106.757 - 10159.396: 88.4849% ( 38) 00:11:48.551 10159.396 - 10212.035: 88.8107% ( 44) 00:11:48.551 10212.035 - 10264.675: 89.0625% ( 34) 00:11:48.551 10264.675 - 10317.314: 89.3735% ( 42) 00:11:48.551 10317.314 - 10369.953: 89.6179% ( 33) 00:11:48.551 10369.953 - 10422.593: 89.9141% ( 40) 00:11:48.551 10422.593 - 10475.232: 90.1807% ( 36) 00:11:48.551 10475.232 - 10527.871: 90.4177% ( 32) 00:11:48.551 10527.871 - 10580.511: 90.6472% ( 31) 00:11:48.551 10580.511 - 10633.150: 90.8249% ( 24) 00:11:48.551 10633.150 - 10685.790: 91.0249% ( 27) 00:11:48.551 10685.790 - 10738.429: 91.2841% ( 35) 00:11:48.551 10738.429 - 10791.068: 91.4618% ( 24) 00:11:48.551 10791.068 - 10843.708: 91.6765% ( 29) 00:11:48.551 10843.708 - 10896.347: 91.8617% ( 25) 00:11:48.551 10896.347 - 10948.986: 92.0394% ( 24) 00:11:48.551 10948.986 - 11001.626: 92.2245% ( 25) 00:11:48.551 11001.626 - 11054.265: 92.4023% ( 24) 00:11:48.551 11054.265 - 11106.904: 92.5726% ( 23) 00:11:48.551 11106.904 - 11159.544: 92.7429% ( 23) 00:11:48.551 11159.544 - 11212.183: 92.8836% ( 19) 00:11:48.551 11212.183 - 11264.822: 92.9947% ( 15) 00:11:48.551 11264.822 - 11317.462: 93.1206% ( 17) 00:11:48.551 11317.462 - 11370.101: 93.2539% ( 18) 00:11:48.551 11370.101 - 11422.741: 93.3723% ( 16) 00:11:48.551 11422.741 - 11475.380: 93.5056% ( 18) 00:11:48.551 11475.380 - 11528.019: 93.6463% ( 19) 00:11:48.551 11528.019 - 11580.659: 93.7796% ( 18) 00:11:48.551 11580.659 - 11633.298: 93.8759% ( 13) 00:11:48.551 11633.298 - 11685.937: 93.9870% ( 15) 00:11:48.551 11685.937 - 11738.577: 94.1203% ( 18) 00:11:48.551 11738.577 - 11791.216: 94.2313% ( 15) 00:11:48.551 11791.216 - 11843.855: 94.3128% ( 11) 00:11:48.551 11843.855 - 11896.495: 94.4239% ( 15) 00:11:48.551 11896.495 - 11949.134: 94.4757% ( 7) 00:11:48.551 11949.134 - 12001.773: 94.5350% ( 8) 00:11:48.551 12001.773 - 12054.413: 94.5794% ( 6) 00:11:48.551 12054.413 - 12107.052: 94.6312% ( 7) 00:11:48.551 12107.052 - 12159.692: 94.6608% ( 4) 00:11:48.551 12159.692 - 12212.331: 94.6905% ( 4) 00:11:48.551 12212.331 - 12264.970: 94.7053% ( 2) 00:11:48.551 12264.970 - 12317.610: 94.7423% ( 5) 00:11:48.551 12317.610 - 12370.249: 94.7571% ( 2) 00:11:48.551 12370.249 - 12422.888: 94.8089% ( 7) 00:11:48.551 12422.888 - 12475.528: 94.8460% ( 5) 00:11:48.551 12475.528 - 12528.167: 94.8756% ( 4) 00:11:48.551 12528.167 - 12580.806: 94.9200% ( 6) 00:11:48.551 12580.806 - 12633.446: 94.9570% ( 5) 00:11:48.551 12633.446 - 12686.085: 94.9941% ( 5) 00:11:48.551 12686.085 - 12738.724: 95.0459% ( 7) 00:11:48.551 12738.724 - 12791.364: 95.1422% ( 13) 00:11:48.551 12791.364 - 12844.003: 95.1792% ( 5) 00:11:48.551 12844.003 - 12896.643: 95.2236% ( 6) 00:11:48.551 12896.643 - 12949.282: 95.2681% ( 6) 00:11:48.551 12949.282 - 13001.921: 95.3199% ( 7) 00:11:48.551 13001.921 - 13054.561: 95.3866% ( 9) 00:11:48.551 13054.561 - 13107.200: 95.4162% ( 4) 00:11:48.551 13107.200 - 13159.839: 95.4532% ( 5) 00:11:48.551 13159.839 - 13212.479: 95.5198% ( 9) 00:11:48.551 13212.479 - 13265.118: 95.5717% ( 7) 
00:11:48.551 13265.118 - 13317.757: 95.6235% ( 7) 00:11:48.551 13317.757 - 13370.397: 95.6828% ( 8) 00:11:48.551 13370.397 - 13423.036: 95.7272% ( 6) 00:11:48.551 13423.036 - 13475.676: 95.8012% ( 10) 00:11:48.551 13475.676 - 13580.954: 95.9345% ( 18) 00:11:48.551 13580.954 - 13686.233: 96.0530% ( 16) 00:11:48.551 13686.233 - 13791.512: 96.1937% ( 19) 00:11:48.551 13791.512 - 13896.790: 96.2826% ( 12) 00:11:48.551 13896.790 - 14002.069: 96.4233% ( 19) 00:11:48.551 14002.069 - 14107.348: 96.5566% ( 18) 00:11:48.551 14107.348 - 14212.627: 96.7121% ( 21) 00:11:48.551 14212.627 - 14317.905: 96.8380% ( 17) 00:11:48.551 14317.905 - 14423.184: 96.9787% ( 19) 00:11:48.551 14423.184 - 14528.463: 97.1268% ( 20) 00:11:48.551 14528.463 - 14633.741: 97.2749% ( 20) 00:11:48.551 14633.741 - 14739.020: 97.4082% ( 18) 00:11:48.551 14739.020 - 14844.299: 97.5415% ( 18) 00:11:48.551 14844.299 - 14949.578: 97.6748% ( 18) 00:11:48.551 14949.578 - 15054.856: 97.8081% ( 18) 00:11:48.551 15054.856 - 15160.135: 97.9562% ( 20) 00:11:48.551 15160.135 - 15265.414: 98.1043% ( 20) 00:11:48.551 15265.414 - 15370.692: 98.2598% ( 21) 00:11:48.551 15370.692 - 15475.971: 98.3857% ( 17) 00:11:48.551 15475.971 - 15581.250: 98.4597% ( 10) 00:11:48.551 15581.250 - 15686.529: 98.5486% ( 12) 00:11:48.552 15686.529 - 15791.807: 98.6374% ( 12) 00:11:48.552 15791.807 - 15897.086: 98.7337% ( 13) 00:11:48.552 15897.086 - 16002.365: 98.8078% ( 10) 00:11:48.552 16002.365 - 16107.643: 98.9040% ( 13) 00:11:48.552 16107.643 - 16212.922: 98.9559% ( 7) 00:11:48.552 16212.922 - 16318.201: 99.0003% ( 6) 00:11:48.552 16318.201 - 16423.480: 99.0299% ( 4) 00:11:48.552 16423.480 - 16528.758: 99.0521% ( 3) 00:11:48.552 43164.273 - 43374.831: 99.0892% ( 5) 00:11:48.552 43374.831 - 43585.388: 99.1410% ( 7) 00:11:48.552 43585.388 - 43795.945: 99.1928% ( 7) 00:11:48.552 43795.945 - 44006.503: 99.2521% ( 8) 00:11:48.552 44006.503 - 44217.060: 99.2965% ( 6) 00:11:48.552 44217.060 - 44427.618: 99.3483% ( 7) 00:11:48.552 44427.618 - 44638.175: 99.4076% ( 8) 00:11:48.552 44638.175 - 44848.733: 99.4594% ( 7) 00:11:48.552 44848.733 - 45059.290: 99.5113% ( 7) 00:11:48.552 45059.290 - 45269.847: 99.5261% ( 2) 00:11:48.552 49270.439 - 49480.996: 99.5409% ( 2) 00:11:48.552 49480.996 - 49691.553: 99.6001% ( 8) 00:11:48.552 49691.553 - 49902.111: 99.6445% ( 6) 00:11:48.552 49902.111 - 50112.668: 99.7038% ( 8) 00:11:48.552 50112.668 - 50323.226: 99.7482% ( 6) 00:11:48.552 50323.226 - 50533.783: 99.8075% ( 8) 00:11:48.552 50533.783 - 50744.341: 99.8667% ( 8) 00:11:48.552 50744.341 - 50954.898: 99.9111% ( 6) 00:11:48.552 50954.898 - 51165.455: 99.9630% ( 7) 00:11:48.552 51165.455 - 51376.013: 100.0000% ( 5) 00:11:48.552 00:11:48.552 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:48.552 ============================================================================== 00:11:48.552 Range in us Cumulative IO count 00:11:48.552 7895.904 - 7948.543: 0.0074% ( 1) 00:11:48.552 7948.543 - 8001.182: 0.0296% ( 3) 00:11:48.552 8001.182 - 8053.822: 0.1851% ( 21) 00:11:48.552 8053.822 - 8106.461: 0.6961% ( 69) 00:11:48.552 8106.461 - 8159.100: 2.0735% ( 186) 00:11:48.552 8159.100 - 8211.740: 4.2802% ( 298) 00:11:48.552 8211.740 - 8264.379: 7.1682% ( 390) 00:11:48.552 8264.379 - 8317.018: 10.1970% ( 409) 00:11:48.552 8317.018 - 8369.658: 13.7737% ( 483) 00:11:48.552 8369.658 - 8422.297: 17.6762% ( 527) 00:11:48.552 8422.297 - 8474.937: 21.9342% ( 575) 00:11:48.552 8474.937 - 8527.576: 26.3255% ( 593) 00:11:48.552 8527.576 - 8580.215: 30.7613% ( 599) 00:11:48.552 
8580.215 - 8632.855: 35.2784% ( 610) 00:11:48.552 8632.855 - 8685.494: 40.2251% ( 668) 00:11:48.552 8685.494 - 8738.133: 45.1200% ( 661) 00:11:48.552 8738.133 - 8790.773: 50.0000% ( 659) 00:11:48.552 8790.773 - 8843.412: 54.8578% ( 656) 00:11:48.552 8843.412 - 8896.051: 59.7082% ( 655) 00:11:48.552 8896.051 - 8948.691: 64.0181% ( 582) 00:11:48.552 8948.691 - 9001.330: 67.7725% ( 507) 00:11:48.552 9001.330 - 9053.969: 70.7568% ( 403) 00:11:48.552 9053.969 - 9106.609: 73.1709% ( 326) 00:11:48.552 9106.609 - 9159.248: 75.2073% ( 275) 00:11:48.552 9159.248 - 9211.888: 76.9846% ( 240) 00:11:48.552 9211.888 - 9264.527: 78.4656% ( 200) 00:11:48.552 9264.527 - 9317.166: 79.6875% ( 165) 00:11:48.552 9317.166 - 9369.806: 80.7761% ( 147) 00:11:48.552 9369.806 - 9422.445: 81.6795% ( 122) 00:11:48.552 9422.445 - 9475.084: 82.5015% ( 111) 00:11:48.552 9475.084 - 9527.724: 83.1754% ( 91) 00:11:48.552 9527.724 - 9580.363: 83.8048% ( 85) 00:11:48.552 9580.363 - 9633.002: 84.3750% ( 77) 00:11:48.552 9633.002 - 9685.642: 84.9748% ( 81) 00:11:48.552 9685.642 - 9738.281: 85.4784% ( 68) 00:11:48.552 9738.281 - 9790.920: 85.9671% ( 66) 00:11:48.552 9790.920 - 9843.560: 86.3892% ( 57) 00:11:48.552 9843.560 - 9896.199: 86.8187% ( 58) 00:11:48.552 9896.199 - 9948.839: 87.2260% ( 55) 00:11:48.552 9948.839 - 10001.478: 87.6481% ( 57) 00:11:48.552 10001.478 - 10054.117: 88.0110% ( 49) 00:11:48.552 10054.117 - 10106.757: 88.3886% ( 51) 00:11:48.552 10106.757 - 10159.396: 88.7293% ( 46) 00:11:48.552 10159.396 - 10212.035: 89.0403% ( 42) 00:11:48.552 10212.035 - 10264.675: 89.3513% ( 42) 00:11:48.552 10264.675 - 10317.314: 89.6327% ( 38) 00:11:48.552 10317.314 - 10369.953: 89.8845% ( 34) 00:11:48.552 10369.953 - 10422.593: 90.1140% ( 31) 00:11:48.552 10422.593 - 10475.232: 90.3214% ( 28) 00:11:48.552 10475.232 - 10527.871: 90.4843% ( 22) 00:11:48.552 10527.871 - 10580.511: 90.5954% ( 15) 00:11:48.552 10580.511 - 10633.150: 90.6991% ( 14) 00:11:48.552 10633.150 - 10685.790: 90.7953% ( 13) 00:11:48.552 10685.790 - 10738.429: 90.9064% ( 15) 00:11:48.552 10738.429 - 10791.068: 90.9953% ( 12) 00:11:48.552 10791.068 - 10843.708: 91.1211% ( 17) 00:11:48.552 10843.708 - 10896.347: 91.2100% ( 12) 00:11:48.552 10896.347 - 10948.986: 91.3581% ( 20) 00:11:48.552 10948.986 - 11001.626: 91.5210% ( 22) 00:11:48.552 11001.626 - 11054.265: 91.6765% ( 21) 00:11:48.552 11054.265 - 11106.904: 91.8320% ( 21) 00:11:48.552 11106.904 - 11159.544: 91.9579% ( 17) 00:11:48.552 11159.544 - 11212.183: 92.1283% ( 23) 00:11:48.552 11212.183 - 11264.822: 92.2912% ( 22) 00:11:48.552 11264.822 - 11317.462: 92.4689% ( 24) 00:11:48.552 11317.462 - 11370.101: 92.6170% ( 20) 00:11:48.552 11370.101 - 11422.741: 92.7873% ( 23) 00:11:48.552 11422.741 - 11475.380: 92.9280% ( 19) 00:11:48.552 11475.380 - 11528.019: 93.0983% ( 23) 00:11:48.552 11528.019 - 11580.659: 93.2539% ( 21) 00:11:48.552 11580.659 - 11633.298: 93.4020% ( 20) 00:11:48.552 11633.298 - 11685.937: 93.5501% ( 20) 00:11:48.552 11685.937 - 11738.577: 93.6537% ( 14) 00:11:48.552 11738.577 - 11791.216: 93.7796% ( 17) 00:11:48.552 11791.216 - 11843.855: 93.8981% ( 16) 00:11:48.552 11843.855 - 11896.495: 93.9944% ( 13) 00:11:48.552 11896.495 - 11949.134: 94.0758% ( 11) 00:11:48.552 11949.134 - 12001.773: 94.1721% ( 13) 00:11:48.552 12001.773 - 12054.413: 94.2758% ( 14) 00:11:48.552 12054.413 - 12107.052: 94.3868% ( 15) 00:11:48.552 12107.052 - 12159.692: 94.4757% ( 12) 00:11:48.552 12159.692 - 12212.331: 94.5572% ( 11) 00:11:48.552 12212.331 - 12264.970: 94.6312% ( 10) 00:11:48.552 12264.970 - 
12317.610: 94.6979% ( 9) 00:11:48.552 12317.610 - 12370.249: 94.7275% ( 4) 00:11:48.552 12370.249 - 12422.888: 94.7719% ( 6) 00:11:48.552 12422.888 - 12475.528: 94.8312% ( 8) 00:11:48.552 12475.528 - 12528.167: 94.8978% ( 9) 00:11:48.552 12528.167 - 12580.806: 94.9496% ( 7) 00:11:48.552 12580.806 - 12633.446: 95.0015% ( 7) 00:11:48.552 12633.446 - 12686.085: 95.0681% ( 9) 00:11:48.552 12686.085 - 12738.724: 95.1422% ( 10) 00:11:48.552 12738.724 - 12791.364: 95.2014% ( 8) 00:11:48.552 12791.364 - 12844.003: 95.2459% ( 6) 00:11:48.552 12844.003 - 12896.643: 95.3051% ( 8) 00:11:48.552 12896.643 - 12949.282: 95.3643% ( 8) 00:11:48.552 12949.282 - 13001.921: 95.4384% ( 10) 00:11:48.552 13001.921 - 13054.561: 95.4754% ( 5) 00:11:48.552 13054.561 - 13107.200: 95.5273% ( 7) 00:11:48.552 13107.200 - 13159.839: 95.5791% ( 7) 00:11:48.552 13159.839 - 13212.479: 95.6457% ( 9) 00:11:48.552 13212.479 - 13265.118: 95.6902% ( 6) 00:11:48.552 13265.118 - 13317.757: 95.7346% ( 6) 00:11:48.552 13317.757 - 13370.397: 95.7790% ( 6) 00:11:48.552 13370.397 - 13423.036: 95.8161% ( 5) 00:11:48.552 13423.036 - 13475.676: 95.8679% ( 7) 00:11:48.552 13475.676 - 13580.954: 95.9642% ( 13) 00:11:48.552 13580.954 - 13686.233: 96.0975% ( 18) 00:11:48.552 13686.233 - 13791.512: 96.2604% ( 22) 00:11:48.552 13791.512 - 13896.790: 96.4085% ( 20) 00:11:48.552 13896.790 - 14002.069: 96.5566% ( 20) 00:11:48.552 14002.069 - 14107.348: 96.7047% ( 20) 00:11:48.552 14107.348 - 14212.627: 96.8380% ( 18) 00:11:48.552 14212.627 - 14317.905: 96.9639% ( 17) 00:11:48.552 14317.905 - 14423.184: 97.0749% ( 15) 00:11:48.552 14423.184 - 14528.463: 97.1860% ( 15) 00:11:48.552 14528.463 - 14633.741: 97.3119% ( 17) 00:11:48.552 14633.741 - 14739.020: 97.4304% ( 16) 00:11:48.552 14739.020 - 14844.299: 97.5193% ( 12) 00:11:48.552 14844.299 - 14949.578: 97.6525% ( 18) 00:11:48.552 14949.578 - 15054.856: 97.7562% ( 14) 00:11:48.552 15054.856 - 15160.135: 97.8821% ( 17) 00:11:48.552 15160.135 - 15265.414: 97.9858% ( 14) 00:11:48.552 15265.414 - 15370.692: 98.0820% ( 13) 00:11:48.552 15370.692 - 15475.971: 98.2005% ( 16) 00:11:48.552 15475.971 - 15581.250: 98.3264% ( 17) 00:11:48.552 15581.250 - 15686.529: 98.4449% ( 16) 00:11:48.552 15686.529 - 15791.807: 98.5560% ( 15) 00:11:48.552 15791.807 - 15897.086: 98.6745% ( 16) 00:11:48.552 15897.086 - 16002.365: 98.7707% ( 13) 00:11:48.552 16002.365 - 16107.643: 98.8448% ( 10) 00:11:48.552 16107.643 - 16212.922: 98.9040% ( 8) 00:11:48.552 16212.922 - 16318.201: 98.9485% ( 6) 00:11:48.552 16318.201 - 16423.480: 98.9929% ( 6) 00:11:48.552 16423.480 - 16528.758: 99.0299% ( 5) 00:11:48.552 16528.758 - 16634.037: 99.0521% ( 3) 00:11:48.552 41269.256 - 41479.814: 99.0966% ( 6) 00:11:48.552 41479.814 - 41690.371: 99.1558% ( 8) 00:11:48.552 41690.371 - 41900.929: 99.2076% ( 7) 00:11:48.552 41900.929 - 42111.486: 99.2595% ( 7) 00:11:48.552 42111.486 - 42322.043: 99.3187% ( 8) 00:11:48.552 42322.043 - 42532.601: 99.3780% ( 8) 00:11:48.552 42532.601 - 42743.158: 99.4298% ( 7) 00:11:48.552 42743.158 - 42953.716: 99.4890% ( 8) 00:11:48.552 42953.716 - 43164.273: 99.5261% ( 5) 00:11:48.552 47375.422 - 47585.979: 99.5853% ( 8) 00:11:48.552 47585.979 - 47796.537: 99.6371% ( 7) 00:11:48.552 47796.537 - 48007.094: 99.6890% ( 7) 00:11:48.552 48007.094 - 48217.651: 99.7482% ( 8) 00:11:48.552 48217.651 - 48428.209: 99.8149% ( 9) 00:11:48.552 48428.209 - 48638.766: 99.8667% ( 7) 00:11:48.552 48638.766 - 48849.324: 99.9111% ( 6) 00:11:48.552 48849.324 - 49059.881: 99.9704% ( 8) 00:11:48.552 49059.881 - 49270.439: 100.0000% ( 4) 
00:11:48.552 00:11:48.552 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:48.553 ============================================================================== 00:11:48.553 Range in us Cumulative IO count 00:11:48.553 7895.904 - 7948.543: 0.0074% ( 1) 00:11:48.553 7948.543 - 8001.182: 0.0370% ( 4) 00:11:48.553 8001.182 - 8053.822: 0.2740% ( 32) 00:11:48.553 8053.822 - 8106.461: 0.9034% ( 85) 00:11:48.553 8106.461 - 8159.100: 2.1623% ( 170) 00:11:48.553 8159.100 - 8211.740: 4.3913% ( 301) 00:11:48.553 8211.740 - 8264.379: 7.1534% ( 373) 00:11:48.553 8264.379 - 8317.018: 10.5895% ( 464) 00:11:48.553 8317.018 - 8369.658: 14.2032% ( 488) 00:11:48.553 8369.658 - 8422.297: 17.9428% ( 505) 00:11:48.553 8422.297 - 8474.937: 22.1342% ( 566) 00:11:48.553 8474.937 - 8527.576: 26.5329% ( 594) 00:11:48.553 8527.576 - 8580.215: 31.0056% ( 604) 00:11:48.553 8580.215 - 8632.855: 35.6783% ( 631) 00:11:48.553 8632.855 - 8685.494: 40.3732% ( 634) 00:11:48.553 8685.494 - 8738.133: 45.3051% ( 666) 00:11:48.553 8738.133 - 8790.773: 50.1925% ( 660) 00:11:48.553 8790.773 - 8843.412: 55.1170% ( 665) 00:11:48.553 8843.412 - 8896.051: 59.8860% ( 644) 00:11:48.553 8896.051 - 8948.691: 64.2106% ( 584) 00:11:48.553 8948.691 - 9001.330: 67.8762% ( 495) 00:11:48.553 9001.330 - 9053.969: 70.9345% ( 413) 00:11:48.553 9053.969 - 9106.609: 73.4301% ( 337) 00:11:48.553 9106.609 - 9159.248: 75.4369% ( 271) 00:11:48.553 9159.248 - 9211.888: 77.1993% ( 238) 00:11:48.553 9211.888 - 9264.527: 78.6656% ( 198) 00:11:48.553 9264.527 - 9317.166: 79.9245% ( 170) 00:11:48.553 9317.166 - 9369.806: 80.9760% ( 142) 00:11:48.553 9369.806 - 9422.445: 81.8720% ( 121) 00:11:48.553 9422.445 - 9475.084: 82.7162% ( 114) 00:11:48.553 9475.084 - 9527.724: 83.4568% ( 100) 00:11:48.553 9527.724 - 9580.363: 83.9973% ( 73) 00:11:48.553 9580.363 - 9633.002: 84.5305% ( 72) 00:11:48.553 9633.002 - 9685.642: 85.0267% ( 67) 00:11:48.553 9685.642 - 9738.281: 85.5524% ( 71) 00:11:48.553 9738.281 - 9790.920: 85.9597% ( 55) 00:11:48.553 9790.920 - 9843.560: 86.3522% ( 53) 00:11:48.553 9843.560 - 9896.199: 86.7076% ( 48) 00:11:48.553 9896.199 - 9948.839: 87.1297% ( 57) 00:11:48.553 9948.839 - 10001.478: 87.5074% ( 51) 00:11:48.553 10001.478 - 10054.117: 87.9147% ( 55) 00:11:48.553 10054.117 - 10106.757: 88.2924% ( 51) 00:11:48.553 10106.757 - 10159.396: 88.7219% ( 58) 00:11:48.553 10159.396 - 10212.035: 89.0329% ( 42) 00:11:48.553 10212.035 - 10264.675: 89.3291% ( 40) 00:11:48.553 10264.675 - 10317.314: 89.5883% ( 35) 00:11:48.553 10317.314 - 10369.953: 89.8178% ( 31) 00:11:48.553 10369.953 - 10422.593: 90.0474% ( 31) 00:11:48.553 10422.593 - 10475.232: 90.2325% ( 25) 00:11:48.553 10475.232 - 10527.871: 90.4399% ( 28) 00:11:48.553 10527.871 - 10580.511: 90.6102% ( 23) 00:11:48.553 10580.511 - 10633.150: 90.7509% ( 19) 00:11:48.553 10633.150 - 10685.790: 90.8990% ( 20) 00:11:48.553 10685.790 - 10738.429: 91.0323% ( 18) 00:11:48.553 10738.429 - 10791.068: 91.1582% ( 17) 00:11:48.553 10791.068 - 10843.708: 91.3137% ( 21) 00:11:48.553 10843.708 - 10896.347: 91.4914% ( 24) 00:11:48.553 10896.347 - 10948.986: 91.6839% ( 26) 00:11:48.553 10948.986 - 11001.626: 91.8617% ( 24) 00:11:48.553 11001.626 - 11054.265: 92.0616% ( 27) 00:11:48.553 11054.265 - 11106.904: 92.2097% ( 20) 00:11:48.553 11106.904 - 11159.544: 92.3504% ( 19) 00:11:48.553 11159.544 - 11212.183: 92.5133% ( 22) 00:11:48.553 11212.183 - 11264.822: 92.6688% ( 21) 00:11:48.553 11264.822 - 11317.462: 92.8243% ( 21) 00:11:48.553 11317.462 - 11370.101: 92.9873% ( 22) 00:11:48.553 11370.101 - 
11422.741: 93.1206% ( 18) 00:11:48.553 11422.741 - 11475.380: 93.2390% ( 16) 00:11:48.553 11475.380 - 11528.019: 93.3871% ( 20) 00:11:48.553 11528.019 - 11580.659: 93.5056% ( 16) 00:11:48.553 11580.659 - 11633.298: 93.6167% ( 15) 00:11:48.553 11633.298 - 11685.937: 93.7204% ( 14) 00:11:48.553 11685.937 - 11738.577: 93.8389% ( 16) 00:11:48.553 11738.577 - 11791.216: 93.9573% ( 16) 00:11:48.553 11791.216 - 11843.855: 94.0536% ( 13) 00:11:48.553 11843.855 - 11896.495: 94.1647% ( 15) 00:11:48.553 11896.495 - 11949.134: 94.2536% ( 12) 00:11:48.553 11949.134 - 12001.773: 94.3276% ( 10) 00:11:48.553 12001.773 - 12054.413: 94.3943% ( 9) 00:11:48.553 12054.413 - 12107.052: 94.4757% ( 11) 00:11:48.553 12107.052 - 12159.692: 94.5424% ( 9) 00:11:48.553 12159.692 - 12212.331: 94.6016% ( 8) 00:11:48.553 12212.331 - 12264.970: 94.6460% ( 6) 00:11:48.553 12264.970 - 12317.610: 94.6831% ( 5) 00:11:48.553 12317.610 - 12370.249: 94.7423% ( 8) 00:11:48.553 12370.249 - 12422.888: 94.7793% ( 5) 00:11:48.553 12422.888 - 12475.528: 94.8386% ( 8) 00:11:48.553 12475.528 - 12528.167: 94.9274% ( 12) 00:11:48.553 12528.167 - 12580.806: 94.9941% ( 9) 00:11:48.553 12580.806 - 12633.446: 95.0311% ( 5) 00:11:48.553 12633.446 - 12686.085: 95.0829% ( 7) 00:11:48.553 12686.085 - 12738.724: 95.1348% ( 7) 00:11:48.553 12738.724 - 12791.364: 95.1866% ( 7) 00:11:48.553 12791.364 - 12844.003: 95.2310% ( 6) 00:11:48.553 12844.003 - 12896.643: 95.2829% ( 7) 00:11:48.553 12896.643 - 12949.282: 95.3199% ( 5) 00:11:48.553 12949.282 - 13001.921: 95.3643% ( 6) 00:11:48.553 13001.921 - 13054.561: 95.4014% ( 5) 00:11:48.553 13054.561 - 13107.200: 95.4310% ( 4) 00:11:48.553 13107.200 - 13159.839: 95.4532% ( 3) 00:11:48.553 13159.839 - 13212.479: 95.4902% ( 5) 00:11:48.553 13212.479 - 13265.118: 95.5198% ( 4) 00:11:48.553 13265.118 - 13317.757: 95.5643% ( 6) 00:11:48.553 13317.757 - 13370.397: 95.6013% ( 5) 00:11:48.553 13370.397 - 13423.036: 95.6457% ( 6) 00:11:48.553 13423.036 - 13475.676: 95.7050% ( 8) 00:11:48.553 13475.676 - 13580.954: 95.8235% ( 16) 00:11:48.553 13580.954 - 13686.233: 95.9197% ( 13) 00:11:48.553 13686.233 - 13791.512: 96.0530% ( 18) 00:11:48.553 13791.512 - 13896.790: 96.1789% ( 17) 00:11:48.553 13896.790 - 14002.069: 96.3196% ( 19) 00:11:48.553 14002.069 - 14107.348: 96.4603% ( 19) 00:11:48.553 14107.348 - 14212.627: 96.6010% ( 19) 00:11:48.553 14212.627 - 14317.905: 96.7491% ( 20) 00:11:48.553 14317.905 - 14423.184: 96.8602% ( 15) 00:11:48.553 14423.184 - 14528.463: 96.9416% ( 11) 00:11:48.553 14528.463 - 14633.741: 97.0601% ( 16) 00:11:48.553 14633.741 - 14739.020: 97.1712% ( 15) 00:11:48.553 14739.020 - 14844.299: 97.2897% ( 16) 00:11:48.553 14844.299 - 14949.578: 97.4378% ( 20) 00:11:48.553 14949.578 - 15054.856: 97.5933% ( 21) 00:11:48.553 15054.856 - 15160.135: 97.7488% ( 21) 00:11:48.553 15160.135 - 15265.414: 97.9339% ( 25) 00:11:48.553 15265.414 - 15370.692: 98.0895% ( 21) 00:11:48.553 15370.692 - 15475.971: 98.2153% ( 17) 00:11:48.553 15475.971 - 15581.250: 98.3042% ( 12) 00:11:48.553 15581.250 - 15686.529: 98.4005% ( 13) 00:11:48.553 15686.529 - 15791.807: 98.4967% ( 13) 00:11:48.553 15791.807 - 15897.086: 98.5930% ( 13) 00:11:48.553 15897.086 - 16002.365: 98.6819% ( 12) 00:11:48.553 16002.365 - 16107.643: 98.7855% ( 14) 00:11:48.553 16107.643 - 16212.922: 98.8670% ( 11) 00:11:48.553 16212.922 - 16318.201: 98.9188% ( 7) 00:11:48.553 16318.201 - 16423.480: 98.9485% ( 4) 00:11:48.553 16423.480 - 16528.758: 98.9781% ( 4) 00:11:48.553 16528.758 - 16634.037: 99.0003% ( 3) 00:11:48.553 16634.037 - 16739.316: 
99.0299% ( 4) 00:11:48.553 16739.316 - 16844.594: 99.0521% ( 3) 00:11:48.553 39584.797 - 39795.354: 99.0669% ( 2) 00:11:48.553 39795.354 - 40005.912: 99.1188% ( 7) 00:11:48.553 40005.912 - 40216.469: 99.1706% ( 7) 00:11:48.553 40216.469 - 40427.027: 99.2225% ( 7) 00:11:48.553 40427.027 - 40637.584: 99.2817% ( 8) 00:11:48.553 40637.584 - 40848.141: 99.3335% ( 7) 00:11:48.553 40848.141 - 41058.699: 99.3928% ( 8) 00:11:48.553 41058.699 - 41269.256: 99.4520% ( 8) 00:11:48.553 41269.256 - 41479.814: 99.4964% ( 6) 00:11:48.553 41479.814 - 41690.371: 99.5261% ( 4) 00:11:48.553 45690.962 - 45901.520: 99.5409% ( 2) 00:11:48.553 45901.520 - 46112.077: 99.5927% ( 7) 00:11:48.553 46112.077 - 46322.635: 99.6445% ( 7) 00:11:48.553 46322.635 - 46533.192: 99.7038% ( 8) 00:11:48.553 46533.192 - 46743.749: 99.7556% ( 7) 00:11:48.553 46743.749 - 46954.307: 99.8149% ( 8) 00:11:48.553 46954.307 - 47164.864: 99.8667% ( 7) 00:11:48.553 47164.864 - 47375.422: 99.9259% ( 8) 00:11:48.553 47375.422 - 47585.979: 99.9778% ( 7) 00:11:48.553 47585.979 - 47796.537: 100.0000% ( 3) 00:11:48.553 00:11:48.553 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:48.553 ============================================================================== 00:11:48.553 Range in us Cumulative IO count 00:11:48.553 7948.543 - 8001.182: 0.0518% ( 7) 00:11:48.553 8001.182 - 8053.822: 0.2592% ( 28) 00:11:48.553 8053.822 - 8106.461: 0.8886% ( 85) 00:11:48.553 8106.461 - 8159.100: 2.5992% ( 231) 00:11:48.553 8159.100 - 8211.740: 4.6875% ( 282) 00:11:48.553 8211.740 - 8264.379: 7.6570% ( 401) 00:11:48.553 8264.379 - 8317.018: 10.8560% ( 432) 00:11:48.553 8317.018 - 8369.658: 14.3513% ( 472) 00:11:48.553 8369.658 - 8422.297: 18.2613% ( 528) 00:11:48.553 8422.297 - 8474.937: 22.4600% ( 567) 00:11:48.553 8474.937 - 8527.576: 26.7698% ( 582) 00:11:48.553 8527.576 - 8580.215: 31.2426% ( 604) 00:11:48.553 8580.215 - 8632.855: 35.8338% ( 620) 00:11:48.553 8632.855 - 8685.494: 40.4251% ( 620) 00:11:48.553 8685.494 - 8738.133: 45.1718% ( 641) 00:11:48.553 8738.133 - 8790.773: 50.0666% ( 661) 00:11:48.553 8790.773 - 8843.412: 55.0281% ( 670) 00:11:48.553 8843.412 - 8896.051: 59.7823% ( 642) 00:11:48.553 8896.051 - 8948.691: 64.0329% ( 574) 00:11:48.553 8948.691 - 9001.330: 67.6466% ( 488) 00:11:48.553 9001.330 - 9053.969: 70.8605% ( 434) 00:11:48.553 9053.969 - 9106.609: 73.5708% ( 366) 00:11:48.553 9106.609 - 9159.248: 75.7257% ( 291) 00:11:48.554 9159.248 - 9211.888: 77.4733% ( 236) 00:11:48.554 9211.888 - 9264.527: 78.8951% ( 192) 00:11:48.554 9264.527 - 9317.166: 80.0207% ( 152) 00:11:48.554 9317.166 - 9369.806: 81.0278% ( 136) 00:11:48.554 9369.806 - 9422.445: 81.9461% ( 124) 00:11:48.554 9422.445 - 9475.084: 82.8051% ( 116) 00:11:48.554 9475.084 - 9527.724: 83.4864% ( 92) 00:11:48.554 9527.724 - 9580.363: 84.1232% ( 86) 00:11:48.554 9580.363 - 9633.002: 84.6564% ( 72) 00:11:48.554 9633.002 - 9685.642: 85.1525% ( 67) 00:11:48.554 9685.642 - 9738.281: 85.6265% ( 64) 00:11:48.554 9738.281 - 9790.920: 86.1004% ( 64) 00:11:48.554 9790.920 - 9843.560: 86.5003% ( 54) 00:11:48.554 9843.560 - 9896.199: 86.9076% ( 55) 00:11:48.554 9896.199 - 9948.839: 87.3815% ( 64) 00:11:48.554 9948.839 - 10001.478: 87.8036% ( 57) 00:11:48.554 10001.478 - 10054.117: 88.1961% ( 53) 00:11:48.554 10054.117 - 10106.757: 88.5515% ( 48) 00:11:48.554 10106.757 - 10159.396: 88.8181% ( 36) 00:11:48.554 10159.396 - 10212.035: 89.1514% ( 45) 00:11:48.554 10212.035 - 10264.675: 89.5216% ( 50) 00:11:48.554 10264.675 - 10317.314: 89.7956% ( 37) 00:11:48.554 10317.314 - 
10369.953: 90.0326% ( 32) 00:11:48.554 10369.953 - 10422.593: 90.2473% ( 29) 00:11:48.554 10422.593 - 10475.232: 90.4325% ( 25) 00:11:48.554 10475.232 - 10527.871: 90.5954% ( 22) 00:11:48.554 10527.871 - 10580.511: 90.7805% ( 25) 00:11:48.554 10580.511 - 10633.150: 90.9286% ( 20) 00:11:48.554 10633.150 - 10685.790: 91.0767% ( 20) 00:11:48.554 10685.790 - 10738.429: 91.2248% ( 20) 00:11:48.554 10738.429 - 10791.068: 91.3655% ( 19) 00:11:48.554 10791.068 - 10843.708: 91.4840% ( 16) 00:11:48.554 10843.708 - 10896.347: 91.6173% ( 18) 00:11:48.554 10896.347 - 10948.986: 91.7802% ( 22) 00:11:48.554 10948.986 - 11001.626: 91.9209% ( 19) 00:11:48.554 11001.626 - 11054.265: 92.0764% ( 21) 00:11:48.554 11054.265 - 11106.904: 92.2541% ( 24) 00:11:48.554 11106.904 - 11159.544: 92.4097% ( 21) 00:11:48.554 11159.544 - 11212.183: 92.5948% ( 25) 00:11:48.554 11212.183 - 11264.822: 92.7429% ( 20) 00:11:48.554 11264.822 - 11317.462: 92.9206% ( 24) 00:11:48.554 11317.462 - 11370.101: 93.0687% ( 20) 00:11:48.554 11370.101 - 11422.741: 93.1798% ( 15) 00:11:48.554 11422.741 - 11475.380: 93.2687% ( 12) 00:11:48.554 11475.380 - 11528.019: 93.3575% ( 12) 00:11:48.554 11528.019 - 11580.659: 93.4760% ( 16) 00:11:48.554 11580.659 - 11633.298: 93.5871% ( 15) 00:11:48.554 11633.298 - 11685.937: 93.6834% ( 13) 00:11:48.554 11685.937 - 11738.577: 93.7870% ( 14) 00:11:48.554 11738.577 - 11791.216: 93.8907% ( 14) 00:11:48.554 11791.216 - 11843.855: 93.9722% ( 11) 00:11:48.554 11843.855 - 11896.495: 94.0462% ( 10) 00:11:48.554 11896.495 - 11949.134: 94.1277% ( 11) 00:11:48.554 11949.134 - 12001.773: 94.2091% ( 11) 00:11:48.554 12001.773 - 12054.413: 94.2758% ( 9) 00:11:48.554 12054.413 - 12107.052: 94.3646% ( 12) 00:11:48.554 12107.052 - 12159.692: 94.4683% ( 14) 00:11:48.554 12159.692 - 12212.331: 94.5720% ( 14) 00:11:48.554 12212.331 - 12264.970: 94.6534% ( 11) 00:11:48.554 12264.970 - 12317.610: 94.7201% ( 9) 00:11:48.554 12317.610 - 12370.249: 94.8164% ( 13) 00:11:48.554 12370.249 - 12422.888: 94.9052% ( 12) 00:11:48.554 12422.888 - 12475.528: 94.9867% ( 11) 00:11:48.554 12475.528 - 12528.167: 95.0459% ( 8) 00:11:48.554 12528.167 - 12580.806: 95.0829% ( 5) 00:11:48.554 12580.806 - 12633.446: 95.1274% ( 6) 00:11:48.554 12633.446 - 12686.085: 95.1718% ( 6) 00:11:48.554 12686.085 - 12738.724: 95.2236% ( 7) 00:11:48.554 12738.724 - 12791.364: 95.2681% ( 6) 00:11:48.554 12791.364 - 12844.003: 95.3051% ( 5) 00:11:48.554 12844.003 - 12896.643: 95.3495% ( 6) 00:11:48.554 12896.643 - 12949.282: 95.3940% ( 6) 00:11:48.554 12949.282 - 13001.921: 95.4310% ( 5) 00:11:48.554 13001.921 - 13054.561: 95.4828% ( 7) 00:11:48.554 13054.561 - 13107.200: 95.5347% ( 7) 00:11:48.554 13107.200 - 13159.839: 95.5865% ( 7) 00:11:48.554 13159.839 - 13212.479: 95.6087% ( 3) 00:11:48.554 13212.479 - 13265.118: 95.6383% ( 4) 00:11:48.554 13265.118 - 13317.757: 95.6680% ( 4) 00:11:48.554 13317.757 - 13370.397: 95.7124% ( 6) 00:11:48.554 13370.397 - 13423.036: 95.7642% ( 7) 00:11:48.554 13423.036 - 13475.676: 95.7864% ( 3) 00:11:48.554 13475.676 - 13580.954: 95.8679% ( 11) 00:11:48.554 13580.954 - 13686.233: 95.9642% ( 13) 00:11:48.554 13686.233 - 13791.512: 96.0308% ( 9) 00:11:48.554 13791.512 - 13896.790: 96.1197% ( 12) 00:11:48.554 13896.790 - 14002.069: 96.1789% ( 8) 00:11:48.554 14002.069 - 14107.348: 96.2159% ( 5) 00:11:48.554 14107.348 - 14212.627: 96.2678% ( 7) 00:11:48.554 14212.627 - 14317.905: 96.3566% ( 12) 00:11:48.554 14317.905 - 14423.184: 96.4455% ( 12) 00:11:48.554 14423.184 - 14528.463: 96.5418% ( 13) 00:11:48.554 14528.463 - 
14633.741: 96.6677% ( 17)
00:11:48.554 [remaining latency-histogram buckets for this namespace, 14633.741us - 45690.962us: cumulative IO count rises from 96.6677% to 100.0000%]
00:11:48.554 
00:11:48.554 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:11:48.554 ==============================================================================
00:11:48.554        Range in us     Cumulative    IO count
00:11:48.555 [latency-histogram buckets, 7948.543us - 43585.388us: cumulative IO count rises from 0.0370% to 100.0000%]
00:11:48.555 
00:11:48.555 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:11:48.555 ==============================================================================
00:11:48.556        Range in us     Cumulative    IO count
00:11:48.556 [latency-histogram buckets, 8001.182us - 36847.550us: cumulative IO count rises from 0.1400% to 100.0000%]
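The bucket dumps above read cumulatively: each "low - high: pct% ( n )" entry gives the running share of all I/Os that completed at or below the bucket's upper latency bound, with the per-bucket I/O count in parentheses, so any percentile can be read off as the first entry whose percentage reaches the target. A minimal sketch of doing that mechanically over a saved copy of this output (the file name perf.log is hypothetical):

    # Print the first histogram bucket whose cumulative share reaches 99%.
    # Bucket entries contain " - " and end in "pct% ( n )", so the percentage
    # is always the third-from-last field, with or without the Jenkins
    # timestamp prefix.
    awk '/ - .*%/ { p = $(NF-2); sub(/%/, "", p); if (p + 0 >= 99) { print; exit } }' perf.log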
00:11:48.556 
00:11:48.556 18:09:58 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:11:49.938 Initializing NVMe Controllers
00:11:49.938 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:49.938 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:49.938 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:49.938 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:49.938 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:11:49.938 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:11:49.938 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:11:49.938 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:11:49.938 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:11:49.938 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:11:49.938 Initialization complete. Launching workers.
00:11:49.938 ========================================================
00:11:49.938                                               Latency(us)
00:11:49.938 Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:49.938 PCIE (0000:00:10.0) NSID 1 from core 0:   10991.30     128.80   11672.16    7386.77   50149.98
00:11:49.938 PCIE (0000:00:11.0) NSID 1 from core 0:   10991.30     128.80   11654.00    7581.87   48638.42
00:11:49.938 PCIE (0000:00:13.0) NSID 1 from core 0:   10991.30     128.80   11636.24    7309.60   47993.22
00:11:49.938 PCIE (0000:00:12.0) NSID 1 from core 0:   10991.30     128.80   11618.82    7525.62   46805.42
00:11:49.938 PCIE (0000:00:12.0) NSID 2 from core 0:   10991.30     128.80   11600.96    7527.73   45580.48
00:11:49.938 PCIE (0000:00:12.0) NSID 3 from core 0:   10991.30     128.80   11583.37    7536.16   44283.92
00:11:49.938 ========================================================
00:11:49.938 Total                                  :   65947.83     772.83   11627.59    7309.60   50149.98
00:11:49.938 
00:11:49.938 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:11:49.938 =================================================================================
00:11:49.938   1.00000% :  7843.264us
00:11:49.938  10.00000% :  8264.379us
00:11:49.938  25.00000% :  9053.969us
00:11:49.938  50.00000% :  9843.560us
00:11:49.938  75.00000% : 13896.790us
00:11:49.938  90.00000% : 16949.873us
00:11:49.938  95.00000% : 18423.775us
00:11:49.938  98.00000% : 20424.071us
00:11:49.938  99.00000% : 34952.533us
00:11:49.938  99.50000% : 47585.979us
00:11:49.938  99.90000% : 49691.553us
00:11:49.938  99.99000% : 50112.668us
00:11:49.938  99.99900% : 50323.226us
00:11:49.938  99.99990% : 50323.226us
00:11:49.938  99.99999% : 50323.226us
00:11:49.938 
00:11:49.938 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:11:49.938 =================================================================================
00:11:49.938   1.00000% :  7895.904us
00:11:49.938  10.00000% :  8317.018us
00:11:49.938  25.00000% :  9053.969us
00:11:49.938  50.00000% :  9790.920us
00:11:49.938  75.00000% : 13896.790us
00:11:49.938  90.00000% : 16739.316us
00:11:49.938  95.00000% : 18318.496us
00:11:49.938  98.00000% : 19897.677us
00:11:49.938  99.00000% : 36215.878us
00:11:49.938  99.50000% : 46533.192us
00:11:49.938  99.90000% : 48217.651us
00:11:49.938  99.99000% : 48638.766us
00:11:49.938  99.99900% : 48638.766us
00:11:49.938  99.99990% : 48638.766us
00:11:49.938  99.99999% : 48638.766us
00:11:49.938 
00:11:49.938 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:11:49.938 =================================================================================
00:11:49.938   1.00000% :  7895.904us
00:11:49.938  10.00000% :  8264.379us
00:11:49.938  25.00000% :  9106.609us
00:11:49.938  50.00000% :  9790.920us
00:11:49.938  75.00000% : 13896.790us
00:11:49.938  90.00000% : 17055.152us
00:11:49.938  95.00000% : 18107.939us
00:11:49.938  98.00000% : 19897.677us
00:11:49.938  99.00000% : 35794.763us
00:11:49.938  99.50000% : 45901.520us
00:11:49.938  99.90000% : 47585.979us
00:11:49.938  99.99000% : 48007.094us
00:11:49.938  99.99900% : 48007.094us
00:11:49.938  99.99990% : 48007.094us
00:11:49.938  99.99999% : 48007.094us
00:11:49.938 
00:11:49.938 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:11:49.938 =================================================================================
00:11:49.938   1.00000% :  7895.904us
00:11:49.938  10.00000% :  8317.018us
00:11:49.938  25.00000% :  9106.609us
00:11:49.938  50.00000% :  9790.920us
00:11:49.938  75.00000% : 14002.069us
00:11:49.938  90.00000% : 16844.594us
00:11:49.938  95.00000% : 18107.939us
00:11:49.938  98.00000% : 19581.841us
00:11:49.938  99.00000% : 34741.976us
00:11:49.938  99.50000% : 44848.733us
00:11:49.938  99.90000% : 46533.192us
00:11:49.938  99.99000% : 46954.307us
00:11:49.938  99.99900% : 46954.307us
00:11:49.938  99.99990% : 46954.307us
00:11:49.938  99.99999% : 46954.307us
00:11:49.938 
00:11:49.938 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:11:49.938 =================================================================================
00:11:49.938   1.00000% :  7895.904us
00:11:49.938  10.00000% :  8317.018us
00:11:49.938  25.00000% :  9053.969us
00:11:49.938  50.00000% :  9790.920us
00:11:49.938  75.00000% : 14107.348us
00:11:49.938  90.00000% : 16739.316us
00:11:49.938  95.00000% : 18107.939us
00:11:49.938  98.00000% : 19476.562us
00:11:49.938  99.00000% : 33478.631us
00:11:49.938  99.50000% : 43585.388us
00:11:49.938  99.90000% : 45269.847us
00:11:49.938  99.99000% : 45690.962us
00:11:49.938  99.99900% : 45690.962us
00:11:49.938  99.99990% : 45690.962us
00:11:49.938  99.99999% : 45690.962us
00:11:49.938 
00:11:49.938 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:11:49.938 =================================================================================
00:11:49.938   1.00000% :  7843.264us
00:11:49.938  10.00000% :  8264.379us
00:11:49.938  25.00000% :  9106.609us
00:11:49.938  50.00000% :  9843.560us
00:11:49.938  75.00000% : 14002.069us
00:11:49.938  90.00000% : 16739.316us
00:11:49.938  95.00000% : 18107.939us
00:11:49.938  98.00000% : 19476.562us
00:11:49.938  99.00000% : 32846.959us
00:11:49.938  99.50000% : 41058.699us
00:11:49.938  99.90000% : 44006.503us
00:11:49.938  99.99000% : 44427.618us
00:11:49.938  99.99900% : 44427.618us
00:11:49.938  99.99990% : 44427.618us
00:11:49.938  99.99999% : 44427.618us
00:11:49.938 
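The report above is the end-of-run summary for this 12 KiB sequential-write pass, and the columns cross-check: MiB/s is IOPS times the 12288-byte I/O size, e.g. 10991.30 * 12288 / 1048576 = 128.80 MiB/s per namespace. A sketch of rerunning the same workload by hand, assuming an SPDK build under $SPDK_DIR with the NVMe devices already bound to a userspace driver via scripts/setup.sh:

    # -q 128: 128 outstanding I/Os per namespace; -w write: sequential writes;
    # -o 12288: 12 KiB I/O size; -t 1: run for one second; -LL: enable latency
    # tracking and print the detailed per-bucket histograms; -i 0: shared-memory ID
    sudo "$SPDK_DIR/build/bin/spdk_nvme_perf" -q 128 -w write -o 12288 -t 1 -LL -i 0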
00:11:49.939 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:11:49.939 ==============================================================================
00:11:49.939        Range in us     Cumulative    IO count
00:11:49.940 [latency-histogram buckets, 7369.510us - 50323.226us: cumulative IO count rises from 0.0182% to 100.0000%]
00:11:49.940 
00:11:49.940 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:11:49.940 ==============================================================================
00:11:49.940        Range in us     Cumulative    IO count
00:11:49.941 [latency-histogram buckets, 7580.067us - 48638.766us: cumulative IO count rises from 0.0182% to 100.0000%]
00:11:49.941 
00:11:49.941 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:11:49.941 ==============================================================================
00:11:49.942        Range in us     Cumulative    IO count
00:11:49.942 [latency-histogram buckets, 7264.231us - 48007.094us: cumulative IO count rises from 0.0091% to 100.0000%]
00:11:49.943 
00:11:49.943 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:11:49.943 ==============================================================================
00:11:49.943        Range in us     Cumulative    IO count
00:11:49.944 [latency-histogram buckets, 7474.789us - 20213.513us: cumulative IO count rises from 0.0091% to 98.6374%; the histogram continues on the next line]
00:11:49.944 20213.513 
- 20318.792: 98.6919% ( 6) 00:11:49.944 20318.792 - 20424.071: 98.7464% ( 6) 00:11:49.944 20424.071 - 20529.349: 98.8009% ( 6) 00:11:49.944 20529.349 - 20634.628: 98.8372% ( 4) 00:11:49.944 33899.746 - 34110.304: 98.9281% ( 10) 00:11:49.944 34110.304 - 34320.861: 98.9371% ( 1) 00:11:49.944 34320.861 - 34531.418: 98.9916% ( 6) 00:11:49.944 34531.418 - 34741.976: 99.0371% ( 5) 00:11:49.944 34741.976 - 34952.533: 99.0825% ( 5) 00:11:49.944 34952.533 - 35163.091: 99.1370% ( 6) 00:11:49.944 35163.091 - 35373.648: 99.1733% ( 4) 00:11:49.944 35373.648 - 35584.206: 99.2278% ( 6) 00:11:49.944 35584.206 - 35794.763: 99.2733% ( 5) 00:11:49.944 35794.763 - 36005.320: 99.3187% ( 5) 00:11:49.944 36005.320 - 36215.878: 99.3641% ( 5) 00:11:49.944 36215.878 - 36426.435: 99.4095% ( 5) 00:11:49.944 36426.435 - 36636.993: 99.4186% ( 1) 00:11:49.944 44217.060 - 44427.618: 99.4368% ( 2) 00:11:49.944 44427.618 - 44638.175: 99.4913% ( 6) 00:11:49.944 44638.175 - 44848.733: 99.5367% ( 5) 00:11:49.944 44848.733 - 45059.290: 99.5912% ( 6) 00:11:49.944 45059.290 - 45269.847: 99.6366% ( 5) 00:11:49.944 45269.847 - 45480.405: 99.6820% ( 5) 00:11:49.944 45480.405 - 45690.962: 99.7366% ( 6) 00:11:49.944 45690.962 - 45901.520: 99.7820% ( 5) 00:11:49.944 45901.520 - 46112.077: 99.8365% ( 6) 00:11:49.944 46112.077 - 46322.635: 99.8910% ( 6) 00:11:49.944 46322.635 - 46533.192: 99.9364% ( 5) 00:11:49.944 46533.192 - 46743.749: 99.9818% ( 5) 00:11:49.944 46743.749 - 46954.307: 100.0000% ( 2) 00:11:49.944 00:11:49.944 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:49.944 ============================================================================== 00:11:49.944 Range in us Cumulative IO count 00:11:49.944 7527.428 - 7580.067: 0.0454% ( 5) 00:11:49.944 7580.067 - 7632.707: 0.1635% ( 13) 00:11:49.944 7632.707 - 7685.346: 0.2816% ( 13) 00:11:49.944 7685.346 - 7737.986: 0.4360% ( 17) 00:11:49.944 7737.986 - 7790.625: 0.6541% ( 24) 00:11:49.944 7790.625 - 7843.264: 0.9448% ( 32) 00:11:49.944 7843.264 - 7895.904: 1.4444% ( 55) 00:11:49.944 7895.904 - 7948.543: 2.0621% ( 68) 00:11:49.944 7948.543 - 8001.182: 2.9797% ( 101) 00:11:49.944 8001.182 - 8053.822: 4.0425% ( 117) 00:11:49.944 8053.822 - 8106.461: 5.2689% ( 135) 00:11:49.944 8106.461 - 8159.100: 6.5044% ( 136) 00:11:49.944 8159.100 - 8211.740: 7.9578% ( 160) 00:11:49.944 8211.740 - 8264.379: 9.5385% ( 174) 00:11:49.944 8264.379 - 8317.018: 11.2009% ( 183) 00:11:49.944 8317.018 - 8369.658: 12.8089% ( 177) 00:11:49.944 8369.658 - 8422.297: 14.6166% ( 199) 00:11:49.944 8422.297 - 8474.937: 15.7431% ( 124) 00:11:49.944 8474.937 - 8527.576: 16.9695% ( 135) 00:11:49.944 8527.576 - 8580.215: 17.9869% ( 112) 00:11:49.944 8580.215 - 8632.855: 18.8499% ( 95) 00:11:49.944 8632.855 - 8685.494: 19.9128% ( 117) 00:11:49.944 8685.494 - 8738.133: 20.7667% ( 94) 00:11:49.944 8738.133 - 8790.773: 21.5298% ( 84) 00:11:49.944 8790.773 - 8843.412: 22.3746% ( 93) 00:11:49.944 8843.412 - 8896.051: 23.1468% ( 85) 00:11:49.944 8896.051 - 8948.691: 23.8190% ( 74) 00:11:49.944 8948.691 - 9001.330: 24.6275% ( 89) 00:11:49.944 9001.330 - 9053.969: 25.5360% ( 100) 00:11:49.944 9053.969 - 9106.609: 26.5262% ( 109) 00:11:49.944 9106.609 - 9159.248: 27.4528% ( 102) 00:11:49.944 9159.248 - 9211.888: 28.6701% ( 134) 00:11:49.944 9211.888 - 9264.527: 30.0600% ( 153) 00:11:49.944 9264.527 - 9317.166: 31.7678% ( 188) 00:11:49.944 9317.166 - 9369.806: 33.9480% ( 240) 00:11:49.944 9369.806 - 9422.445: 35.9920% ( 225) 00:11:49.944 9422.445 - 9475.084: 38.0360% ( 225) 00:11:49.944 9475.084 - 
9527.724: 40.6977% ( 293) 00:11:49.944 9527.724 - 9580.363: 43.0323% ( 257) 00:11:49.944 9580.363 - 9633.002: 45.0491% ( 222) 00:11:49.944 9633.002 - 9685.642: 47.1384% ( 230) 00:11:49.944 9685.642 - 9738.281: 49.2733% ( 235) 00:11:49.944 9738.281 - 9790.920: 50.8721% ( 176) 00:11:49.944 9790.920 - 9843.560: 52.4164% ( 170) 00:11:49.944 9843.560 - 9896.199: 53.4702% ( 116) 00:11:49.944 9896.199 - 9948.839: 54.6148% ( 126) 00:11:49.944 9948.839 - 10001.478: 55.4506% ( 92) 00:11:49.944 10001.478 - 10054.117: 56.3227% ( 96) 00:11:49.944 10054.117 - 10106.757: 57.1857% ( 95) 00:11:49.944 10106.757 - 10159.396: 57.9760% ( 87) 00:11:49.944 10159.396 - 10212.035: 58.6210% ( 71) 00:11:49.944 10212.035 - 10264.675: 59.4840% ( 95) 00:11:49.944 10264.675 - 10317.314: 60.4379% ( 105) 00:11:49.944 10317.314 - 10369.953: 61.0102% ( 63) 00:11:49.944 10369.953 - 10422.593: 61.6915% ( 75) 00:11:49.944 10422.593 - 10475.232: 62.1094% ( 46) 00:11:49.944 10475.232 - 10527.871: 62.4273% ( 35) 00:11:49.944 10527.871 - 10580.511: 62.7725% ( 38) 00:11:49.944 10580.511 - 10633.150: 63.1813% ( 45) 00:11:49.944 10633.150 - 10685.790: 63.4084% ( 25) 00:11:49.944 10685.790 - 10738.429: 63.5901% ( 20) 00:11:49.944 10738.429 - 10791.068: 63.8263% ( 26) 00:11:49.944 10791.068 - 10843.708: 64.2169% ( 43) 00:11:49.944 10843.708 - 10896.347: 64.3441% ( 14) 00:11:49.944 10896.347 - 10948.986: 64.4804% ( 15) 00:11:49.944 10948.986 - 11001.626: 64.5985% ( 13) 00:11:49.944 11001.626 - 11054.265: 64.7347% ( 15) 00:11:49.944 11054.265 - 11106.904: 64.9255% ( 21) 00:11:49.944 11106.904 - 11159.544: 65.1072% ( 20) 00:11:49.944 11159.544 - 11212.183: 65.4070% ( 33) 00:11:49.944 11212.183 - 11264.822: 65.6977% ( 32) 00:11:49.944 11264.822 - 11317.462: 66.0338% ( 37) 00:11:49.944 11317.462 - 11370.101: 66.3608% ( 36) 00:11:49.944 11370.101 - 11422.741: 66.4971% ( 15) 00:11:49.944 11422.741 - 11475.380: 66.5879% ( 10) 00:11:49.944 11475.380 - 11528.019: 66.7060% ( 13) 00:11:49.944 11528.019 - 11580.659: 66.8332% ( 14) 00:11:49.944 11580.659 - 11633.298: 66.9604% ( 14) 00:11:49.944 11633.298 - 11685.937: 67.0149% ( 6) 00:11:49.944 11685.937 - 11738.577: 67.0967% ( 9) 00:11:49.944 11738.577 - 11791.216: 67.1784% ( 9) 00:11:49.944 11791.216 - 11843.855: 67.2783% ( 11) 00:11:49.944 11843.855 - 11896.495: 67.4237% ( 16) 00:11:49.944 11896.495 - 11949.134: 67.5327% ( 12) 00:11:49.944 11949.134 - 12001.773: 67.7598% ( 25) 00:11:49.944 12001.773 - 12054.413: 67.9324% ( 19) 00:11:49.944 12054.413 - 12107.052: 68.0596% ( 14) 00:11:49.944 12107.052 - 12159.692: 68.2322% ( 19) 00:11:49.944 12159.692 - 12212.331: 68.3957% ( 18) 00:11:49.944 12212.331 - 12264.970: 68.6319% ( 26) 00:11:49.945 12264.970 - 12317.610: 68.8045% ( 19) 00:11:49.945 12317.610 - 12370.249: 68.9862% ( 20) 00:11:49.945 12370.249 - 12422.888: 69.2769% ( 32) 00:11:49.945 12422.888 - 12475.528: 69.5676% ( 32) 00:11:49.945 12475.528 - 12528.167: 69.7856% ( 24) 00:11:49.945 12528.167 - 12580.806: 69.9400% ( 17) 00:11:49.945 12580.806 - 12633.446: 70.1581% ( 24) 00:11:49.945 12633.446 - 12686.085: 70.4033% ( 27) 00:11:49.945 12686.085 - 12738.724: 70.6305% ( 25) 00:11:49.945 12738.724 - 12791.364: 70.8212% ( 21) 00:11:49.945 12791.364 - 12844.003: 71.0665% ( 27) 00:11:49.945 12844.003 - 12896.643: 71.4299% ( 40) 00:11:49.945 12896.643 - 12949.282: 71.7751% ( 38) 00:11:49.945 12949.282 - 13001.921: 71.9023% ( 14) 00:11:49.945 13001.921 - 13054.561: 72.0113% ( 12) 00:11:49.945 13054.561 - 13107.200: 72.1203% ( 12) 00:11:49.945 13107.200 - 13159.839: 72.2384% ( 13) 00:11:49.945 
13159.839 - 13212.479: 72.4110% ( 19) 00:11:49.945 13212.479 - 13265.118: 72.5745% ( 18) 00:11:49.945 13265.118 - 13317.757: 72.7380% ( 18) 00:11:49.945 13317.757 - 13370.397: 72.8743% ( 15) 00:11:49.945 13370.397 - 13423.036: 73.0469% ( 19) 00:11:49.945 13423.036 - 13475.676: 73.2195% ( 19) 00:11:49.945 13475.676 - 13580.954: 73.5011% ( 31) 00:11:49.945 13580.954 - 13686.233: 73.7918% ( 32) 00:11:49.945 13686.233 - 13791.512: 74.1370% ( 38) 00:11:49.945 13791.512 - 13896.790: 74.3914% ( 28) 00:11:49.945 13896.790 - 14002.069: 74.6366% ( 27) 00:11:49.945 14002.069 - 14107.348: 75.1999% ( 62) 00:11:49.945 14107.348 - 14212.627: 75.8358% ( 70) 00:11:49.945 14212.627 - 14317.905: 76.3899% ( 61) 00:11:49.945 14317.905 - 14423.184: 76.9350% ( 60) 00:11:49.945 14423.184 - 14528.463: 77.3074% ( 41) 00:11:49.945 14528.463 - 14633.741: 77.7889% ( 53) 00:11:49.945 14633.741 - 14739.020: 78.2431% ( 50) 00:11:49.945 14739.020 - 14844.299: 78.8972% ( 72) 00:11:49.945 14844.299 - 14949.578: 79.5422% ( 71) 00:11:49.945 14949.578 - 15054.856: 80.2961% ( 83) 00:11:49.945 15054.856 - 15160.135: 80.9048% ( 67) 00:11:49.945 15160.135 - 15265.414: 81.5225% ( 68) 00:11:49.945 15265.414 - 15370.692: 82.1584% ( 70) 00:11:49.945 15370.692 - 15475.971: 82.6762% ( 57) 00:11:49.945 15475.971 - 15581.250: 83.0941% ( 46) 00:11:49.945 15581.250 - 15686.529: 83.6573% ( 62) 00:11:49.945 15686.529 - 15791.807: 84.5113% ( 94) 00:11:49.945 15791.807 - 15897.086: 85.0382% ( 58) 00:11:49.945 15897.086 - 16002.365: 85.7740% ( 81) 00:11:49.945 16002.365 - 16107.643: 86.4281% ( 72) 00:11:49.945 16107.643 - 16212.922: 87.2184% ( 87) 00:11:49.945 16212.922 - 16318.201: 88.1086% ( 98) 00:11:49.945 16318.201 - 16423.480: 88.5901% ( 53) 00:11:49.945 16423.480 - 16528.758: 89.1352% ( 60) 00:11:49.945 16528.758 - 16634.037: 89.9346% ( 88) 00:11:49.945 16634.037 - 16739.316: 90.4887% ( 61) 00:11:49.945 16739.316 - 16844.594: 91.1519% ( 73) 00:11:49.945 16844.594 - 16949.873: 91.7151% ( 62) 00:11:49.945 16949.873 - 17055.152: 92.1602% ( 49) 00:11:49.945 17055.152 - 17160.431: 92.5781% ( 46) 00:11:49.945 17160.431 - 17265.709: 93.1686% ( 65) 00:11:49.945 17265.709 - 17370.988: 93.3775% ( 23) 00:11:49.945 17370.988 - 17476.267: 93.5501% ( 19) 00:11:49.945 17476.267 - 17581.545: 93.8408% ( 32) 00:11:49.945 17581.545 - 17686.824: 94.1497% ( 34) 00:11:49.945 17686.824 - 17792.103: 94.3859% ( 26) 00:11:49.945 17792.103 - 17897.382: 94.6039% ( 24) 00:11:49.945 17897.382 - 18002.660: 94.8038% ( 22) 00:11:49.945 18002.660 - 18107.939: 95.0309% ( 25) 00:11:49.945 18107.939 - 18213.218: 95.4124% ( 42) 00:11:49.945 18213.218 - 18318.496: 95.6759% ( 29) 00:11:49.945 18318.496 - 18423.775: 95.8485% ( 19) 00:11:49.945 18423.775 - 18529.054: 95.9757% ( 14) 00:11:49.945 18529.054 - 18634.333: 96.0483% ( 8) 00:11:49.945 18634.333 - 18739.611: 96.1119% ( 7) 00:11:49.945 18739.611 - 18844.890: 96.1755% ( 7) 00:11:49.945 18844.890 - 18950.169: 96.3118% ( 15) 00:11:49.945 18950.169 - 19055.447: 96.4844% ( 19) 00:11:49.945 19055.447 - 19160.726: 97.1475% ( 73) 00:11:49.945 19160.726 - 19266.005: 97.6381% ( 54) 00:11:49.945 19266.005 - 19371.284: 97.9106% ( 30) 00:11:49.945 19371.284 - 19476.562: 98.1741% ( 29) 00:11:49.945 19476.562 - 19581.841: 98.3739% ( 22) 00:11:49.945 19581.841 - 19687.120: 98.4829% ( 12) 00:11:49.945 19687.120 - 19792.398: 98.5828% ( 11) 00:11:49.945 19792.398 - 19897.677: 98.6555% ( 8) 00:11:49.945 19897.677 - 20002.956: 98.7100% ( 6) 00:11:49.945 20002.956 - 20108.235: 98.7555% ( 5) 00:11:49.945 20108.235 - 20213.513: 98.8100% ( 6) 
00:11:49.945 20213.513 - 20318.792: 98.8372% ( 3) 00:11:49.945 32636.402 - 32846.959: 98.8645% ( 3) 00:11:49.945 32846.959 - 33057.516: 98.9099% ( 5) 00:11:49.945 33057.516 - 33268.074: 98.9553% ( 5) 00:11:49.945 33268.074 - 33478.631: 99.0098% ( 6) 00:11:49.945 33478.631 - 33689.189: 99.0552% ( 5) 00:11:49.945 33689.189 - 33899.746: 99.1007% ( 5) 00:11:49.945 33899.746 - 34110.304: 99.1461% ( 5) 00:11:49.945 34110.304 - 34320.861: 99.1915% ( 5) 00:11:49.945 34320.861 - 34531.418: 99.2460% ( 6) 00:11:49.945 34531.418 - 34741.976: 99.2823% ( 4) 00:11:49.945 34741.976 - 34952.533: 99.3278% ( 5) 00:11:49.945 34952.533 - 35163.091: 99.3732% ( 5) 00:11:49.945 35163.091 - 35373.648: 99.4186% ( 5) 00:11:49.945 42953.716 - 43164.273: 99.4459% ( 3) 00:11:49.945 43164.273 - 43374.831: 99.4913% ( 5) 00:11:49.945 43374.831 - 43585.388: 99.5458% ( 6) 00:11:49.945 43585.388 - 43795.945: 99.5912% ( 5) 00:11:49.945 43795.945 - 44006.503: 99.6457% ( 6) 00:11:49.945 44006.503 - 44217.060: 99.6911% ( 5) 00:11:49.945 44217.060 - 44427.618: 99.7366% ( 5) 00:11:49.945 44427.618 - 44638.175: 99.7911% ( 6) 00:11:49.945 44638.175 - 44848.733: 99.8274% ( 4) 00:11:49.945 44848.733 - 45059.290: 99.8728% ( 5) 00:11:49.945 45059.290 - 45269.847: 99.9182% ( 5) 00:11:49.945 45269.847 - 45480.405: 99.9727% ( 6) 00:11:49.945 45480.405 - 45690.962: 100.0000% ( 3) 00:11:49.945 00:11:49.945 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:49.945 ============================================================================== 00:11:49.945 Range in us Cumulative IO count 00:11:49.945 7527.428 - 7580.067: 0.0091% ( 1) 00:11:49.945 7580.067 - 7632.707: 0.0908% ( 9) 00:11:49.945 7632.707 - 7685.346: 0.2089% ( 13) 00:11:49.945 7685.346 - 7737.986: 0.4906% ( 31) 00:11:49.945 7737.986 - 7790.625: 0.9720% ( 53) 00:11:49.945 7790.625 - 7843.264: 1.4262% ( 50) 00:11:49.945 7843.264 - 7895.904: 2.1893% ( 84) 00:11:49.945 7895.904 - 7948.543: 2.9978% ( 89) 00:11:49.945 7948.543 - 8001.182: 3.7427% ( 82) 00:11:49.945 8001.182 - 8053.822: 4.7238% ( 108) 00:11:49.945 8053.822 - 8106.461: 6.0865% ( 150) 00:11:49.945 8106.461 - 8159.100: 7.3583% ( 140) 00:11:49.945 8159.100 - 8211.740: 8.6846% ( 146) 00:11:49.945 8211.740 - 8264.379: 10.3379% ( 182) 00:11:49.945 8264.379 - 8317.018: 11.3645% ( 113) 00:11:49.945 8317.018 - 8369.658: 12.3456% ( 108) 00:11:49.945 8369.658 - 8422.297: 13.3539% ( 111) 00:11:49.945 8422.297 - 8474.937: 14.5440% ( 131) 00:11:49.945 8474.937 - 8527.576: 15.8249% ( 141) 00:11:49.945 8527.576 - 8580.215: 17.1148% ( 142) 00:11:49.945 8580.215 - 8632.855: 18.4048% ( 142) 00:11:49.945 8632.855 - 8685.494: 19.5585% ( 127) 00:11:49.945 8685.494 - 8738.133: 20.3216% ( 84) 00:11:49.945 8738.133 - 8790.773: 21.0483% ( 80) 00:11:49.945 8790.773 - 8843.412: 21.5298% ( 53) 00:11:49.945 8843.412 - 8896.051: 22.1839% ( 72) 00:11:49.945 8896.051 - 8948.691: 23.0105% ( 91) 00:11:49.945 8948.691 - 9001.330: 23.8735% ( 95) 00:11:49.945 9001.330 - 9053.969: 24.8910% ( 112) 00:11:49.945 9053.969 - 9106.609: 26.1991% ( 144) 00:11:49.945 9106.609 - 9159.248: 27.3347% ( 125) 00:11:49.945 9159.248 - 9211.888: 28.4884% ( 127) 00:11:49.945 9211.888 - 9264.527: 29.9237% ( 158) 00:11:49.945 9264.527 - 9317.166: 31.8223% ( 209) 00:11:49.945 9317.166 - 9369.806: 33.6573% ( 202) 00:11:49.945 9369.806 - 9422.445: 35.7831% ( 234) 00:11:49.945 9422.445 - 9475.084: 38.1904% ( 265) 00:11:49.945 9475.084 - 9527.724: 40.3706% ( 240) 00:11:49.945 9527.724 - 9580.363: 42.4328% ( 227) 00:11:49.945 9580.363 - 9633.002: 44.5131% ( 229) 
00:11:49.945 9633.002 - 9685.642: 46.3299% ( 200) 00:11:49.945 9685.642 - 9738.281: 48.0560% ( 190) 00:11:49.945 9738.281 - 9790.920: 49.8637% ( 199) 00:11:49.945 9790.920 - 9843.560: 51.5807% ( 189) 00:11:49.945 9843.560 - 9896.199: 53.2613% ( 185) 00:11:49.945 9896.199 - 9948.839: 54.7693% ( 166) 00:11:49.945 9948.839 - 10001.478: 56.5952% ( 201) 00:11:49.945 10001.478 - 10054.117: 57.8943% ( 143) 00:11:49.945 10054.117 - 10106.757: 58.9935% ( 121) 00:11:49.945 10106.757 - 10159.396: 59.9110% ( 101) 00:11:49.945 10159.396 - 10212.035: 60.9193% ( 111) 00:11:49.945 10212.035 - 10264.675: 61.5098% ( 65) 00:11:49.945 10264.675 - 10317.314: 61.9095% ( 44) 00:11:49.945 10317.314 - 10369.953: 62.1366% ( 25) 00:11:49.945 10369.953 - 10422.593: 62.3728% ( 26) 00:11:49.945 10422.593 - 10475.232: 62.5727% ( 22) 00:11:49.945 10475.232 - 10527.871: 62.7816% ( 23) 00:11:49.945 10527.871 - 10580.511: 62.9451% ( 18) 00:11:49.945 10580.511 - 10633.150: 63.2267% ( 31) 00:11:49.945 10633.150 - 10685.790: 63.4539% ( 25) 00:11:49.946 10685.790 - 10738.429: 63.6719% ( 24) 00:11:49.946 10738.429 - 10791.068: 63.7718% ( 11) 00:11:49.946 10791.068 - 10843.708: 63.8354% ( 7) 00:11:49.946 10843.708 - 10896.347: 63.8899% ( 6) 00:11:49.946 10896.347 - 10948.986: 63.9989% ( 12) 00:11:49.946 10948.986 - 11001.626: 64.1170% ( 13) 00:11:49.946 11001.626 - 11054.265: 64.4077% ( 32) 00:11:49.946 11054.265 - 11106.904: 65.0073% ( 66) 00:11:49.946 11106.904 - 11159.544: 65.4070% ( 44) 00:11:49.946 11159.544 - 11212.183: 65.4978% ( 10) 00:11:49.946 11212.183 - 11264.822: 65.6068% ( 12) 00:11:49.946 11264.822 - 11317.462: 65.7340% ( 14) 00:11:49.946 11317.462 - 11370.101: 65.8249% ( 10) 00:11:49.946 11370.101 - 11422.741: 65.8884% ( 7) 00:11:49.946 11422.741 - 11475.380: 65.9702% ( 9) 00:11:49.946 11475.380 - 11528.019: 66.1246% ( 17) 00:11:49.946 11528.019 - 11580.659: 66.2609% ( 15) 00:11:49.946 11580.659 - 11633.298: 66.3790% ( 13) 00:11:49.946 11633.298 - 11685.937: 66.5153% ( 15) 00:11:49.946 11685.937 - 11738.577: 66.7060% ( 21) 00:11:49.946 11738.577 - 11791.216: 66.9059% ( 22) 00:11:49.946 11791.216 - 11843.855: 67.1602% ( 28) 00:11:49.946 11843.855 - 11896.495: 67.5418% ( 42) 00:11:49.946 11896.495 - 11949.134: 67.6690% ( 14) 00:11:49.946 11949.134 - 12001.773: 67.7871% ( 13) 00:11:49.946 12001.773 - 12054.413: 67.9778% ( 21) 00:11:49.946 12054.413 - 12107.052: 68.3866% ( 45) 00:11:49.946 12107.052 - 12159.692: 68.6682% ( 31) 00:11:49.946 12159.692 - 12212.331: 68.9953% ( 36) 00:11:49.946 12212.331 - 12264.970: 69.3041% ( 34) 00:11:49.946 12264.970 - 12317.610: 69.4222% ( 13) 00:11:49.946 12317.610 - 12370.249: 69.5403% ( 13) 00:11:49.946 12370.249 - 12422.888: 69.6312% ( 10) 00:11:49.946 12422.888 - 12475.528: 69.7311% ( 11) 00:11:49.946 12475.528 - 12528.167: 69.8401% ( 12) 00:11:49.946 12528.167 - 12580.806: 69.9310% ( 10) 00:11:49.946 12580.806 - 12633.446: 70.0581% ( 14) 00:11:49.946 12633.446 - 12686.085: 70.2217% ( 18) 00:11:49.946 12686.085 - 12738.724: 70.3852% ( 18) 00:11:49.946 12738.724 - 12791.364: 70.5941% ( 23) 00:11:49.946 12791.364 - 12844.003: 70.7485% ( 17) 00:11:49.946 12844.003 - 12896.643: 70.9393% ( 21) 00:11:49.946 12896.643 - 12949.282: 71.1119% ( 19) 00:11:49.946 12949.282 - 13001.921: 71.3209% ( 23) 00:11:49.946 13001.921 - 13054.561: 71.4935% ( 19) 00:11:49.946 13054.561 - 13107.200: 71.6842% ( 21) 00:11:49.946 13107.200 - 13159.839: 71.9477% ( 29) 00:11:49.946 13159.839 - 13212.479: 72.2020% ( 28) 00:11:49.946 13212.479 - 13265.118: 72.3474% ( 16) 00:11:49.946 13265.118 - 13317.757: 
72.5200% ( 19) 00:11:49.946 13317.757 - 13370.397: 72.7925% ( 30) 00:11:49.946 13370.397 - 13423.036: 72.9106% ( 13) 00:11:49.946 13423.036 - 13475.676: 73.0469% ( 15) 00:11:49.946 13475.676 - 13580.954: 73.4102% ( 40) 00:11:49.946 13580.954 - 13686.233: 73.8463% ( 48) 00:11:49.946 13686.233 - 13791.512: 74.1279% ( 31) 00:11:49.946 13791.512 - 13896.790: 74.5549% ( 47) 00:11:49.946 13896.790 - 14002.069: 75.0273% ( 52) 00:11:49.946 14002.069 - 14107.348: 75.4451% ( 46) 00:11:49.946 14107.348 - 14212.627: 75.7903% ( 38) 00:11:49.946 14212.627 - 14317.905: 76.3172% ( 58) 00:11:49.946 14317.905 - 14423.184: 77.0440% ( 80) 00:11:49.946 14423.184 - 14528.463: 77.8434% ( 88) 00:11:49.946 14528.463 - 14633.741: 78.3884% ( 60) 00:11:49.946 14633.741 - 14739.020: 79.0243% ( 70) 00:11:49.946 14739.020 - 14844.299: 79.7057% ( 75) 00:11:49.946 14844.299 - 14949.578: 80.3870% ( 75) 00:11:49.946 14949.578 - 15054.856: 80.9320% ( 60) 00:11:49.946 15054.856 - 15160.135: 81.2954% ( 40) 00:11:49.946 15160.135 - 15265.414: 81.7133% ( 46) 00:11:49.946 15265.414 - 15370.692: 82.1312% ( 46) 00:11:49.946 15370.692 - 15475.971: 82.6036% ( 52) 00:11:49.946 15475.971 - 15581.250: 83.3576% ( 83) 00:11:49.946 15581.250 - 15686.529: 83.9299% ( 63) 00:11:49.946 15686.529 - 15791.807: 84.7656% ( 92) 00:11:49.946 15791.807 - 15897.086: 85.4833% ( 79) 00:11:49.946 15897.086 - 16002.365: 86.3463% ( 95) 00:11:49.946 16002.365 - 16107.643: 86.9640% ( 68) 00:11:49.946 16107.643 - 16212.922: 87.5818% ( 68) 00:11:49.946 16212.922 - 16318.201: 88.3085% ( 80) 00:11:49.946 16318.201 - 16423.480: 88.8536% ( 60) 00:11:49.946 16423.480 - 16528.758: 89.3169% ( 51) 00:11:49.946 16528.758 - 16634.037: 89.7166% ( 44) 00:11:49.946 16634.037 - 16739.316: 90.2344% ( 57) 00:11:49.946 16739.316 - 16844.594: 90.8430% ( 67) 00:11:49.946 16844.594 - 16949.873: 91.2972% ( 50) 00:11:49.946 16949.873 - 17055.152: 91.7424% ( 49) 00:11:49.946 17055.152 - 17160.431: 92.0967% ( 39) 00:11:49.946 17160.431 - 17265.709: 92.4419% ( 38) 00:11:49.946 17265.709 - 17370.988: 92.8325% ( 43) 00:11:49.946 17370.988 - 17476.267: 93.4502% ( 68) 00:11:49.946 17476.267 - 17581.545: 93.8318% ( 42) 00:11:49.946 17581.545 - 17686.824: 94.1951% ( 40) 00:11:49.946 17686.824 - 17792.103: 94.5040% ( 34) 00:11:49.946 17792.103 - 17897.382: 94.8038% ( 33) 00:11:49.946 17897.382 - 18002.660: 94.9673% ( 18) 00:11:49.946 18002.660 - 18107.939: 95.0672% ( 11) 00:11:49.946 18107.939 - 18213.218: 95.1762% ( 12) 00:11:49.946 18213.218 - 18318.496: 95.2762% ( 11) 00:11:49.946 18318.496 - 18423.775: 95.4124% ( 15) 00:11:49.946 18423.775 - 18529.054: 95.6032% ( 21) 00:11:49.946 18529.054 - 18634.333: 95.9847% ( 42) 00:11:49.946 18634.333 - 18739.611: 96.2300% ( 27) 00:11:49.946 18739.611 - 18844.890: 96.3935% ( 18) 00:11:49.946 18844.890 - 18950.169: 96.6388% ( 27) 00:11:49.946 18950.169 - 19055.447: 97.0930% ( 50) 00:11:49.946 19055.447 - 19160.726: 97.7198% ( 69) 00:11:49.946 19160.726 - 19266.005: 97.8289% ( 12) 00:11:49.946 19266.005 - 19371.284: 97.9379% ( 12) 00:11:49.946 19371.284 - 19476.562: 98.0560% ( 13) 00:11:49.946 19476.562 - 19581.841: 98.1650% ( 12) 00:11:49.946 19581.841 - 19687.120: 98.2467% ( 9) 00:11:49.946 19687.120 - 19792.398: 98.3103% ( 7) 00:11:49.946 19792.398 - 19897.677: 98.3648% ( 6) 00:11:49.946 19897.677 - 20002.956: 98.4284% ( 7) 00:11:49.946 20002.956 - 20108.235: 98.6737% ( 27) 00:11:49.946 20108.235 - 20213.513: 98.7736% ( 11) 00:11:49.946 20213.513 - 20318.792: 98.8281% ( 6) 00:11:49.946 20318.792 - 20424.071: 98.8372% ( 1) 00:11:49.946 32215.287 - 
32425.844: 98.8735% ( 4) 00:11:49.946 32425.844 - 32636.402: 98.9553% ( 9) 00:11:49.946 32636.402 - 32846.959: 99.0371% ( 9) 00:11:49.946 32846.959 - 33057.516: 99.1370% ( 11) 00:11:49.946 33057.516 - 33268.074: 99.2188% ( 9) 00:11:49.946 33268.074 - 33478.631: 99.2551% ( 4) 00:11:49.946 33478.631 - 33689.189: 99.2823% ( 3) 00:11:49.946 33689.189 - 33899.746: 99.3187% ( 4) 00:11:49.946 33899.746 - 34110.304: 99.3550% ( 4) 00:11:49.946 34110.304 - 34320.861: 99.3914% ( 4) 00:11:49.946 34320.861 - 34531.418: 99.4186% ( 3) 00:11:49.946 40848.141 - 41058.699: 99.5004% ( 9) 00:11:49.946 41058.699 - 41269.256: 99.5549% ( 6) 00:11:49.946 42111.486 - 42322.043: 99.5912% ( 4) 00:11:49.946 42322.043 - 42532.601: 99.6275% ( 4) 00:11:49.946 42532.601 - 42743.158: 99.6639% ( 4) 00:11:49.946 42743.158 - 42953.716: 99.7093% ( 5) 00:11:49.946 42953.716 - 43164.273: 99.7366% ( 3) 00:11:49.946 43164.273 - 43374.831: 99.7820% ( 5) 00:11:49.946 43374.831 - 43585.388: 99.8274% ( 5) 00:11:49.946 43585.388 - 43795.945: 99.8819% ( 6) 00:11:49.946 43795.945 - 44006.503: 99.9273% ( 5) 00:11:49.946 44006.503 - 44217.060: 99.9727% ( 5) 00:11:49.946 44217.060 - 44427.618: 100.0000% ( 3) 00:11:49.946 00:11:49.946 18:10:00 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:11:49.946 00:11:49.946 real 0m2.670s 00:11:49.946 user 0m2.287s 00:11:49.946 sys 0m0.280s 00:11:49.946 18:10:00 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.946 18:10:00 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:11:49.946 ************************************ 00:11:49.946 END TEST nvme_perf 00:11:49.946 ************************************ 00:11:49.946 18:10:00 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:49.946 18:10:00 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:49.946 18:10:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.946 18:10:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:49.946 ************************************ 00:11:49.946 START TEST nvme_hello_world 00:11:49.946 ************************************ 00:11:49.946 18:10:00 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:50.206 Initializing NVMe Controllers 00:11:50.206 Attached to 0000:00:10.0 00:11:50.206 Namespace ID: 1 size: 6GB 00:11:50.206 Attached to 0000:00:11.0 00:11:50.206 Namespace ID: 1 size: 5GB 00:11:50.206 Attached to 0000:00:13.0 00:11:50.206 Namespace ID: 1 size: 1GB 00:11:50.206 Attached to 0000:00:12.0 00:11:50.206 Namespace ID: 1 size: 4GB 00:11:50.206 Namespace ID: 2 size: 4GB 00:11:50.206 Namespace ID: 3 size: 4GB 00:11:50.206 Initialization complete. 00:11:50.206 INFO: using host memory buffer for IO 00:11:50.206 Hello world! 00:11:50.206 INFO: using host memory buffer for IO 00:11:50.206 Hello world! 00:11:50.206 INFO: using host memory buffer for IO 00:11:50.206 Hello world! 00:11:50.206 INFO: using host memory buffer for IO 00:11:50.206 Hello world! 00:11:50.206 INFO: using host memory buffer for IO 00:11:50.206 Hello world! 00:11:50.206 INFO: using host memory buffer for IO 00:11:50.206 Hello world! 
00:11:50.206 
00:11:50.206 real 0m0.310s
00:11:50.206 user 0m0.126s
00:11:50.206 sys 0m0.142s
00:11:50.206 18:10:00 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:50.206 18:10:00 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:11:50.206 ************************************
00:11:50.206 END TEST nvme_hello_world
00:11:50.206 ************************************
00:11:50.206 18:10:00 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:11:50.206 18:10:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:50.206 18:10:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:50.206 18:10:00 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:50.206 ************************************
00:11:50.206 START TEST nvme_sgl
00:11:50.206 ************************************
00:11:50.206 18:10:00 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:11:50.464 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:11:50.464 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:11:50.464 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:11:50.464 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:11:50.464 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:11:50.464 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:11:50.464 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:11:50.464 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:11:50.464 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:11:50.464 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:11:50.464 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:11:50.464 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:11:50.464 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:11:50.464 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:11:50.464 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:11:50.464 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:11:50.464 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:11:50.464 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:11:50.464 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:11:50.464 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:11:50.464 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:11:50.464 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:11:50.464 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:11:50.464 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:11:50.464 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:11:50.464 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:11:50.464 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:11:50.464 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:11:50.464 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:11:50.464 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:11:50.464 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:11:50.464 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:11:50.464 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:11:50.464 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:11:50.464 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:11:50.464 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:11:50.464 NVMe Readv/Writev Request test
00:11:50.464 Attached to 0000:00:10.0
00:11:50.464 Attached to 0000:00:11.0
00:11:50.464 Attached to 0000:00:13.0
00:11:50.464 Attached to 0000:00:12.0
00:11:50.464 0000:00:10.0: build_io_request_2 test passed
00:11:50.464 0000:00:10.0: build_io_request_4 test passed
00:11:50.464 0000:00:10.0: build_io_request_5 test passed
00:11:50.464 0000:00:10.0: build_io_request_6 test passed
00:11:50.464 0000:00:10.0: build_io_request_7 test passed
00:11:50.464 0000:00:10.0: build_io_request_10 test passed
00:11:50.464 0000:00:11.0: build_io_request_2 test passed
00:11:50.464 0000:00:11.0: build_io_request_4 test passed
00:11:50.464 0000:00:11.0: build_io_request_5 test passed
00:11:50.464 0000:00:11.0: build_io_request_6 test passed
00:11:50.464 0000:00:11.0: build_io_request_7 test passed
00:11:50.464 0000:00:11.0: build_io_request_10 test passed
00:11:50.464 Cleaning up...
00:11:50.464 
00:11:50.464 real 0m0.376s
00:11:50.464 user 0m0.176s
00:11:50.464 sys 0m0.155s
00:11:50.464 18:10:01 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:50.464 18:10:01 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:11:50.464 ************************************
00:11:50.464 END TEST nvme_sgl
00:11:50.464 ************************************
00:11:50.722 18:10:01 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:11:50.722 18:10:01 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:50.722 18:10:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:50.722 18:10:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:50.722 ************************************
00:11:50.722 START TEST nvme_e2edp
00:11:50.722 ************************************
00:11:50.722 18:10:01 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:11:50.980 NVMe Write/Read with End-to-End data protection test
00:11:50.980 Attached to 0000:00:10.0
00:11:50.980 Attached to 0000:00:11.0
00:11:50.980 Attached to 0000:00:13.0
00:11:50.980 Attached to 0000:00:12.0
00:11:50.980 Cleaning up...
00:11:50.980 
00:11:50.980 real 0m0.306s
00:11:50.980 user 0m0.113s
00:11:50.980 sys 0m0.147s
00:11:50.980 18:10:01 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:50.980 18:10:01 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:11:50.980 ************************************
00:11:50.980 END TEST nvme_e2edp
00:11:50.980 ************************************
00:11:50.980 18:10:01 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:11:50.980 18:10:01 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:50.980 18:10:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:50.980 18:10:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:50.980 ************************************
00:11:50.980 START TEST nvme_reserve
00:11:50.980 ************************************
00:11:50.980 18:10:01 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:11:51.238 =====================================================
00:11:51.238 NVMe Controller at PCI bus 0, device 16, function 0
00:11:51.238 =====================================================
00:11:51.238 Reservations: Not Supported
00:11:51.238 =====================================================
00:11:51.238 NVMe Controller at PCI bus 0, device 17, function 0
00:11:51.238 =====================================================
00:11:51.238 Reservations: Not Supported
00:11:51.238 =====================================================
00:11:51.238 NVMe Controller at PCI bus 0, device 19, function 0
00:11:51.238 =====================================================
00:11:51.238 Reservations: Not Supported
00:11:51.238 =====================================================
00:11:51.238 NVMe Controller at PCI bus 0, device 18, function 0
00:11:51.238 =====================================================
00:11:51.238 Reservations: Not Supported
00:11:51.238 Reservation test passed
00:11:51.238 
00:11:51.238 real 0m0.278s
00:11:51.238 user 0m0.103s
00:11:51.238 sys 0m0.130s
00:11:51.238 18:10:01 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:51.238 18:10:01 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:11:51.238 ************************************
00:11:51.238 END TEST nvme_reserve
00:11:51.238 ************************************
00:11:51.238 18:10:01 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:11:51.238 18:10:01 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:51.238 18:10:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:51.238 18:10:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:51.497 ************************************
00:11:51.497 START TEST nvme_err_injection
00:11:51.497 ************************************
00:11:51.497 18:10:01 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:11:51.755 NVMe Error Injection test
00:11:51.755 Attached to 0000:00:10.0
00:11:51.755 Attached to 0000:00:11.0
00:11:51.755 Attached to 0000:00:13.0
00:11:51.755 Attached to 0000:00:12.0
00:11:51.755 0000:00:10.0: get features failed as expected
00:11:51.755 0000:00:11.0: get features failed as expected
00:11:51.755 0000:00:13.0: get features failed as expected
00:11:51.755 0000:00:12.0: get features failed as expected
00:11:51.755 0000:00:10.0: get features successfully as expected
00:11:51.755 0000:00:11.0: get features successfully as expected
00:11:51.755 0000:00:13.0: get features successfully as expected
00:11:51.755 0000:00:12.0: get features successfully as expected
00:11:51.755 0000:00:10.0: read failed as expected
00:11:51.755 0000:00:11.0: read failed as expected
00:11:51.755 0000:00:13.0: read failed as expected
00:11:51.755 0000:00:12.0: read failed as expected
00:11:51.755 0000:00:11.0: read successfully as expected
00:11:51.755 0000:00:10.0: read successfully as expected
00:11:51.755 0000:00:13.0: read successfully as expected
00:11:51.755 0000:00:12.0: read successfully as expected
00:11:51.755 Cleaning up...
00:11:51.755 
00:11:51.755 real 0m0.321s
00:11:51.755 user 0m0.124s
00:11:51.755 sys 0m0.154s
00:11:51.755 18:10:02 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:51.755 18:10:02 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:11:51.755 ************************************
00:11:51.755 END TEST nvme_err_injection
00:11:51.755 ************************************
00:11:51.755 18:10:02 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:11:51.755 18:10:02 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:11:51.755 18:10:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:51.755 18:10:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:51.755 ************************************
00:11:51.755 START TEST nvme_overhead
00:11:51.755 ************************************
00:11:51.755 18:10:02 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:11:53.129 Initializing NVMe Controllers
00:11:53.129 Attached to 0000:00:10.0
00:11:53.129 Attached to 0000:00:11.0
00:11:53.129 Attached to 0000:00:13.0
00:11:53.129 Attached to 0000:00:12.0
00:11:53.129 Initialization complete. Launching workers.
00:11:53.129 submit (in ns) avg, min, max = 13563.6, 10850.6, 78943.8
00:11:53.129 complete (in ns) avg, min, max = 8671.1, 7715.7, 129720.5
00:11:53.129 
00:11:53.129 Submit histogram
00:11:53.129 ================
00:11:53.129 Range in us Cumulative Count
00:11:53.129 10.847 - 10.898: 0.0156% ( 1)
00:11:53.129 [ ... latency bucket rows truncated ... ]
00:11:53.129 78.548 - 78.959: 100.0000% ( 1)
00:11:53.129 
00:11:53.129 Complete histogram
00:11:53.129 ==================
00:11:53.129 Range in us Cumulative Count
00:11:53.129 7.711 - 7.762: 0.2180% ( 14)
00:11:53.130 [ ... latency bucket rows truncated ... ]
00:11:53.130 129.131 - 129.953: 100.0000% ( 1)
00:11:53.130 
00:11:53.130 
00:11:53.130 real 0m1.293s
00:11:53.130 user 0m1.109s
00:11:53.130 sys 0m0.138s
00:11:53.130 18:10:03 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:53.130 18:10:03 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:11:53.130 ************************************
00:11:53.130 END TEST nvme_overhead
00:11:53.130 ************************************
00:11:53.130 18:10:03 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:11:53.130 18:10:03 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:11:53.130 18:10:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:53.130 18:10:03 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:53.130 ************************************
00:11:53.130 START TEST nvme_arbitration
00:11:53.130 ************************************
00:11:53.130 18:10:03 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:11:56.417 Initializing NVMe Controllers
00:11:56.417 Attached to 0000:00:10.0
00:11:56.417 Attached to 0000:00:11.0
00:11:56.417 Attached to 0000:00:13.0
00:11:56.417 Attached to 0000:00:12.0
00:11:56.417 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:11:56.417 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:11:56.417 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:11:56.417 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:11:56.417 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:11:56.417 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:11:56.417 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:11:56.417 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:11:56.417 Initialization complete. Launching workers.
00:11:56.417 Starting thread on core 1 with urgent priority queue
00:11:56.417 Starting thread on core 2 with urgent priority queue
00:11:56.417 Starting thread on core 3 with urgent priority queue
00:11:56.417 Starting thread on core 0 with urgent priority queue
00:11:56.417 QEMU NVMe Ctrl (12340 ) core 0: 405.33 IO/s 246.71 secs/100000 ios
00:11:56.417 QEMU NVMe Ctrl (12342 ) core 0: 405.33 IO/s 246.71 secs/100000 ios
00:11:56.417 QEMU NVMe Ctrl (12341 ) core 1: 448.00 IO/s 223.21 secs/100000 ios
00:11:56.417 QEMU NVMe Ctrl (12342 ) core 1: 448.00 IO/s 223.21 secs/100000 ios
00:11:56.417 QEMU NVMe Ctrl (12343 ) core 2: 469.33 IO/s 213.07 secs/100000 ios
00:11:56.417 QEMU NVMe Ctrl (12342 ) core 3: 832.00 IO/s 120.19 secs/100000 ios
00:11:56.417 ========================================================
00:11:56.417 
00:11:56.675 
00:11:56.675 real 0m3.440s
00:11:56.675 user 0m9.356s
00:11:56.675 sys 0m0.187s
00:11:56.675 18:10:06 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:56.675 ************************************
00:11:56.675 END TEST nvme_arbitration
00:11:56.675 ************************************
00:11:56.675 18:10:06 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:11:56.675 18:10:07 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:11:56.675 18:10:07 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:11:56.675 18:10:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:56.675 18:10:07 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:56.675 ************************************
00:11:56.675 START TEST nvme_single_aen
00:11:56.675 ************************************
00:11:56.675 18:10:07 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:11:56.935 Asynchronous Event Request test
00:11:56.935 Attached to 0000:00:10.0
00:11:56.935 Attached to 0000:00:11.0
00:11:56.935 Attached to 0000:00:13.0
00:11:56.935 Attached to 0000:00:12.0
00:11:56.935 Reset controller to setup AER completions for this process
00:11:56.935 Registering asynchronous event callbacks...
00:11:56.935 Getting orig temperature thresholds of all controllers 00:11:56.935 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:56.935 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:56.935 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:56.935 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:56.935 Setting all controllers temperature threshold low to trigger AER 00:11:56.935 Waiting for all controllers temperature threshold to be set lower 00:11:56.935 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:56.935 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:56.935 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:56.935 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:56.935 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:56.935 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:56.935 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:56.935 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:56.935 Waiting for all controllers to trigger AER and reset threshold 00:11:56.935 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:56.935 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:56.935 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:56.935 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:56.935 Cleaning up... 00:11:56.935 00:11:56.935 real 0m0.301s 00:11:56.935 user 0m0.109s 00:11:56.935 sys 0m0.148s 00:11:56.935 18:10:07 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.935 18:10:07 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:11:56.935 ************************************ 00:11:56.935 END TEST nvme_single_aen 00:11:56.935 ************************************ 00:11:56.935 18:10:07 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:56.935 18:10:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:56.935 18:10:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.935 18:10:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.935 ************************************ 00:11:56.935 START TEST nvme_doorbell_aers 00:11:56.935 ************************************ 00:11:56.935 18:10:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:11:56.935 18:10:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:11:56.935 18:10:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:56.935 18:10:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:56.935 18:10:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:56.935 18:10:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:56.935 18:10:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:11:56.935 18:10:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:56.935 18:10:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:56.935 18:10:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
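The get_nvme_bdfs helper traced here builds the device list by piping scripts/gen_nvme.sh, which emits a bdev_nvme JSON config for every NVMe device found, through jq to pull out each controller's PCI address. A standalone sketch of the same enumeration plus the per-device loop that follows (paths are this run's checkout):

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 through 0000:00:13.0
  for bdf in "${bdfs[@]}"; do  # each device's doorbell test is capped at 10 s
      timeout --preserve-status 10 \
          "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
  done

That 10-second cap per device is why the test reports roughly 40 s of wall time at the end: four controllers, one timeout window each.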
00:11:57.195 18:10:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:57.195 18:10:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:57.195 18:10:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:57.195 18:10:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:57.456 [2024-12-06 18:10:07.865300] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64518) is not found. Dropping the request. 00:12:07.479 Executing: test_write_invalid_db 00:12:07.479 Waiting for AER completion... 00:12:07.479 Failure: test_write_invalid_db 00:12:07.479 00:12:07.479 Executing: test_invalid_db_write_overflow_sq 00:12:07.479 Waiting for AER completion... 00:12:07.479 Failure: test_invalid_db_write_overflow_sq 00:12:07.479 00:12:07.479 Executing: test_invalid_db_write_overflow_cq 00:12:07.479 Waiting for AER completion... 00:12:07.479 Failure: test_invalid_db_write_overflow_cq 00:12:07.479 00:12:07.479 18:10:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:07.479 18:10:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:07.479 [2024-12-06 18:10:17.905133] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64518) is not found. Dropping the request. 00:12:17.511 Executing: test_write_invalid_db 00:12:17.511 Waiting for AER completion... 00:12:17.511 Failure: test_write_invalid_db 00:12:17.511 00:12:17.511 Executing: test_invalid_db_write_overflow_sq 00:12:17.511 Waiting for AER completion... 00:12:17.511 Failure: test_invalid_db_write_overflow_sq 00:12:17.511 00:12:17.511 Executing: test_invalid_db_write_overflow_cq 00:12:17.511 Waiting for AER completion... 00:12:17.511 Failure: test_invalid_db_write_overflow_cq 00:12:17.511 00:12:17.511 18:10:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:17.511 18:10:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:17.511 [2024-12-06 18:10:27.966916] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64518) is not found. Dropping the request. 00:12:27.490 Executing: test_write_invalid_db 00:12:27.490 Waiting for AER completion... 00:12:27.490 Failure: test_write_invalid_db 00:12:27.490 00:12:27.490 Executing: test_invalid_db_write_overflow_sq 00:12:27.490 Waiting for AER completion... 00:12:27.490 Failure: test_invalid_db_write_overflow_sq 00:12:27.490 00:12:27.490 Executing: test_invalid_db_write_overflow_cq 00:12:27.490 Waiting for AER completion... 
00:12:27.490 Failure: test_invalid_db_write_overflow_cq 00:12:27.490 00:12:27.490 18:10:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:27.490 18:10:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:27.490 [2024-12-06 18:10:38.015874] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64518) is not found. Dropping the request. 00:12:37.471 Executing: test_write_invalid_db 00:12:37.471 Waiting for AER completion... 00:12:37.471 Failure: test_write_invalid_db 00:12:37.471 00:12:37.471 Executing: test_invalid_db_write_overflow_sq 00:12:37.471 Waiting for AER completion... 00:12:37.471 Failure: test_invalid_db_write_overflow_sq 00:12:37.471 00:12:37.471 Executing: test_invalid_db_write_overflow_cq 00:12:37.471 Waiting for AER completion... 00:12:37.471 Failure: test_invalid_db_write_overflow_cq 00:12:37.471 00:12:37.471 00:12:37.471 real 0m40.316s 00:12:37.471 user 0m28.669s 00:12:37.471 sys 0m11.272s 00:12:37.471 18:10:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.471 18:10:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:12:37.471 ************************************ 00:12:37.471 END TEST nvme_doorbell_aers 00:12:37.471 ************************************ 00:12:37.471 18:10:47 nvme -- nvme/nvme.sh@97 -- # uname 00:12:37.471 18:10:47 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:37.471 18:10:47 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:37.471 18:10:47 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:37.471 18:10:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.471 18:10:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.471 ************************************ 00:12:37.471 START TEST nvme_multi_aen 00:12:37.471 ************************************ 00:12:37.471 18:10:47 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:37.729 [2024-12-06 18:10:48.085932] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64518) is not found. Dropping the request. 00:12:37.729 [2024-12-06 18:10:48.086032] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64518) is not found. Dropping the request. 00:12:37.730 [2024-12-06 18:10:48.086049] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64518) is not found. Dropping the request. 00:12:37.730 [2024-12-06 18:10:48.087916] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64518) is not found. Dropping the request. 00:12:37.730 [2024-12-06 18:10:48.087959] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64518) is not found. Dropping the request. 00:12:37.730 [2024-12-06 18:10:48.087973] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64518) is not found. Dropping the request. 00:12:37.730 [2024-12-06 18:10:48.089239] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64518) is not found. 
Dropping the request. 00:12:37.730 [2024-12-06 18:10:48.089288] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64518) is not found. Dropping the request. 00:12:37.730 [2024-12-06 18:10:48.089303] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64518) is not found. Dropping the request. 00:12:37.730 [2024-12-06 18:10:48.090743] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64518) is not found. Dropping the request. 00:12:37.730 [2024-12-06 18:10:48.090782] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64518) is not found. Dropping the request. 00:12:37.730 [2024-12-06 18:10:48.090796] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64518) is not found. Dropping the request. 00:12:37.730 Child process pid: 65038 00:12:37.988 [Child] Asynchronous Event Request test 00:12:37.988 [Child] Attached to 0000:00:10.0 00:12:37.988 [Child] Attached to 0000:00:11.0 00:12:37.988 [Child] Attached to 0000:00:13.0 00:12:37.988 [Child] Attached to 0000:00:12.0 00:12:37.988 [Child] Registering asynchronous event callbacks... 00:12:37.988 [Child] Getting orig temperature thresholds of all controllers 00:12:37.988 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:37.988 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:37.988 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:37.988 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:37.988 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:37.988 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:37.988 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:37.988 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:37.988 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:37.988 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:37.988 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:37.988 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:37.988 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:37.988 [Child] Cleaning up... 00:12:37.988 Asynchronous Event Request test 00:12:37.988 Attached to 0000:00:10.0 00:12:37.988 Attached to 0000:00:11.0 00:12:37.988 Attached to 0000:00:13.0 00:12:37.988 Attached to 0000:00:12.0 00:12:37.988 Reset controller to setup AER completions for this process 00:12:37.988 Registering asynchronous event callbacks... 
00:12:37.989 Getting orig temperature thresholds of all controllers 00:12:37.989 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:37.989 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:37.989 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:37.989 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:37.989 Setting all controllers temperature threshold low to trigger AER 00:12:37.989 Waiting for all controllers temperature threshold to be set lower 00:12:37.989 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:37.989 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:37.989 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:37.989 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:37.989 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:37.989 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:37.989 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:37.989 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:37.989 Waiting for all controllers to trigger AER and reset threshold 00:12:37.989 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:37.989 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:37.989 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:37.989 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:37.989 Cleaning up... 00:12:37.989 00:12:37.989 real 0m0.648s 00:12:37.989 user 0m0.211s 00:12:37.989 sys 0m0.319s 00:12:37.989 18:10:48 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.989 18:10:48 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:12:37.989 ************************************ 00:12:37.989 END TEST nvme_multi_aen 00:12:37.989 ************************************ 00:12:37.989 18:10:48 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:37.989 18:10:48 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:37.989 18:10:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.989 18:10:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.989 ************************************ 00:12:37.989 START TEST nvme_startup 00:12:37.989 ************************************ 00:12:37.989 18:10:48 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:38.247 Initializing NVMe Controllers 00:12:38.247 Attached to 0000:00:10.0 00:12:38.247 Attached to 0000:00:11.0 00:12:38.247 Attached to 0000:00:13.0 00:12:38.247 Attached to 0000:00:12.0 00:12:38.247 Initialization complete. 00:12:38.247 Time used:195934.406 (us). 
00:12:38.247 00:12:38.247 real 0m0.296s 00:12:38.247 user 0m0.105s 00:12:38.247 sys 0m0.147s 00:12:38.506 18:10:48 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.506 18:10:48 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:12:38.506 ************************************ 00:12:38.506 END TEST nvme_startup 00:12:38.506 ************************************ 00:12:38.506 18:10:48 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:38.506 18:10:48 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:38.506 18:10:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.506 18:10:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:38.506 ************************************ 00:12:38.506 START TEST nvme_multi_secondary 00:12:38.506 ************************************ 00:12:38.506 18:10:48 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:12:38.506 18:10:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65094 00:12:38.506 18:10:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:38.506 18:10:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65095 00:12:38.506 18:10:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:38.506 18:10:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:41.790 Initializing NVMe Controllers 00:12:41.790 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:41.790 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:41.790 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:41.790 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:41.790 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:41.790 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:41.790 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:41.790 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:41.790 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:41.790 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:41.790 Initialization complete. Launching workers. 
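nvme_multi_secondary exercises SPDK's multi-process mode: the three spdk_nvme_perf instances just launched all pass -i 0, so they attach to the same DPDK shared-memory id (one ends up primary and the others secondary, depending on startup order) while -c keeps their core masks disjoint. The same launch, sketched as plain shell with the flags taken verbatim from the trace above:

  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # core 0, 5 s
  "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid1=$!   # core 2, 3 s
  "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2             # core 1, foreground
  wait "$pid0" "$pid1"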
00:12:41.790 ======================================================== 00:12:41.790 Latency(us) 00:12:41.790 Device Information : IOPS MiB/s Average min max 00:12:41.790 PCIE (0000:00:10.0) NSID 1 from core 1: 5076.44 19.83 3149.48 1178.46 14611.74 00:12:41.791 PCIE (0000:00:11.0) NSID 1 from core 1: 5076.44 19.83 3151.40 1204.32 14440.89 00:12:41.791 PCIE (0000:00:13.0) NSID 1 from core 1: 5076.44 19.83 3151.51 1041.22 15757.58 00:12:41.791 PCIE (0000:00:12.0) NSID 1 from core 1: 5076.44 19.83 3151.58 1248.93 15593.25 00:12:41.791 PCIE (0000:00:12.0) NSID 2 from core 1: 5076.44 19.83 3151.61 1254.45 15429.90 00:12:41.791 PCIE (0000:00:12.0) NSID 3 from core 1: 5076.44 19.83 3151.67 1127.06 14562.26 00:12:41.791 ======================================================== 00:12:41.791 Total : 30458.62 118.98 3151.21 1041.22 15757.58 00:12:41.791 00:12:42.048 Initializing NVMe Controllers 00:12:42.048 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:42.048 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:42.048 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:42.048 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:42.048 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:42.048 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:42.048 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:42.048 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:42.048 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:42.048 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:42.048 Initialization complete. Launching workers. 00:12:42.048 ======================================================== 00:12:42.049 Latency(us) 00:12:42.049 Device Information : IOPS MiB/s Average min max 00:12:42.049 PCIE (0000:00:10.0) NSID 1 from core 2: 3199.95 12.50 4998.29 1211.99 14729.52 00:12:42.049 PCIE (0000:00:11.0) NSID 1 from core 2: 3199.95 12.50 4999.54 1212.31 14645.98 00:12:42.049 PCIE (0000:00:13.0) NSID 1 from core 2: 3199.95 12.50 4999.41 1200.65 12979.57 00:12:42.049 PCIE (0000:00:12.0) NSID 1 from core 2: 3199.95 12.50 4999.34 1211.32 12842.54 00:12:42.049 PCIE (0000:00:12.0) NSID 2 from core 2: 3199.95 12.50 5005.91 1214.86 12747.48 00:12:42.049 PCIE (0000:00:12.0) NSID 3 from core 2: 3199.95 12.50 5005.85 1222.98 13005.97 00:12:42.049 ======================================================== 00:12:42.049 Total : 19199.68 75.00 5001.39 1200.65 14729.52 00:12:42.049 00:12:42.049 18:10:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65094 00:12:43.947 Initializing NVMe Controllers 00:12:43.947 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:43.947 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:43.947 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:43.947 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:43.947 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:43.947 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:43.947 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:43.947 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:43.947 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:43.947 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:43.947 Initialization complete. Launching workers. 
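The latency figures above are consistent with Little's law: at queue depth 16 per namespace (-q 16), outstanding I/Os = IOPS × average latency should come back to roughly 16, and it does for both runs:

  echo 'scale=2; 5076.44 * 3149.48 / 1000000' | bc   # core 1 table -> 15.98
  echo 'scale=2; 3199.95 * 4998.29 / 1000000' | bc   # core 2 table -> 15.99

The core 2 run's higher per-I/O latency is matched by proportionally lower IOPS, as expected at a fixed queue depth.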
00:12:43.947 ======================================================== 00:12:43.947 Latency(us) 00:12:43.947 Device Information : IOPS MiB/s Average min max 00:12:43.947 PCIE (0000:00:10.0) NSID 1 from core 0: 8083.14 31.57 1977.75 931.50 8150.58 00:12:43.947 PCIE (0000:00:11.0) NSID 1 from core 0: 8083.14 31.57 1978.95 947.90 9242.26 00:12:43.947 PCIE (0000:00:13.0) NSID 1 from core 0: 8083.14 31.57 1978.92 952.02 7096.45 00:12:43.947 PCIE (0000:00:12.0) NSID 1 from core 0: 8083.14 31.57 1978.89 931.95 7734.72 00:12:43.947 PCIE (0000:00:12.0) NSID 2 from core 0: 8083.14 31.57 1978.86 873.68 7984.05 00:12:43.947 PCIE (0000:00:12.0) NSID 3 from core 0: 8083.14 31.57 1978.84 784.09 7912.12 00:12:43.947 ======================================================== 00:12:43.947 Total : 48498.82 189.45 1978.70 784.09 9242.26 00:12:43.947 00:12:43.947 18:10:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65095 00:12:43.947 18:10:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65164 00:12:43.947 18:10:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:12:43.947 18:10:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65165 00:12:43.947 18:10:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:43.947 18:10:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:12:47.287 Initializing NVMe Controllers 00:12:47.287 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:47.287 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:47.287 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:47.287 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:47.287 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:47.287 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:47.287 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:47.287 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:47.287 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:47.287 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:47.287 Initialization complete. Launching workers. 
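For the tables in this test, the MiB/s column is derivable from IOPS at the fixed 4096-byte I/O size: MiB/s = IOPS × 4096 / 2^20. A quick check against the core 0 table above (bc truncates where the table rounds):

  echo 'scale=2; 8083.14 * 4096 / 1048576' | bc    # -> 31.57 (table: 31.57)
  echo 'scale=2; 48498.82 * 4096 / 1048576' | bc   # -> 189.44 (table: 189.45)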
00:12:47.287 ======================================================== 00:12:47.287 Latency(us) 00:12:47.287 Device Information : IOPS MiB/s Average min max 00:12:47.287 PCIE (0000:00:10.0) NSID 1 from core 0: 5120.94 20.00 3122.04 938.30 7211.98 00:12:47.287 PCIE (0000:00:11.0) NSID 1 from core 0: 5120.94 20.00 3124.16 958.16 7622.25 00:12:47.287 PCIE (0000:00:13.0) NSID 1 from core 0: 5120.94 20.00 3124.25 963.36 7452.01 00:12:47.287 PCIE (0000:00:12.0) NSID 1 from core 0: 5120.94 20.00 3124.68 953.13 7123.58 00:12:47.287 PCIE (0000:00:12.0) NSID 2 from core 0: 5120.94 20.00 3125.40 935.91 7424.03 00:12:47.287 PCIE (0000:00:12.0) NSID 3 from core 0: 5126.27 20.02 3122.61 942.96 7310.15 00:12:47.287 ======================================================== 00:12:47.287 Total : 30730.96 120.04 3123.86 935.91 7622.25 00:12:47.287 00:12:47.287 Initializing NVMe Controllers 00:12:47.287 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:47.287 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:47.287 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:47.287 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:47.287 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:47.287 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:47.287 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:47.287 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:47.287 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:47.287 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:47.287 Initialization complete. Launching workers. 00:12:47.287 ======================================================== 00:12:47.287 Latency(us) 00:12:47.287 Device Information : IOPS MiB/s Average min max 00:12:47.287 PCIE (0000:00:10.0) NSID 1 from core 1: 5000.14 19.53 3197.34 1058.96 13541.87 00:12:47.287 PCIE (0000:00:11.0) NSID 1 from core 1: 5000.14 19.53 3199.40 1065.21 14025.63 00:12:47.287 PCIE (0000:00:13.0) NSID 1 from core 1: 5000.14 19.53 3199.46 1089.43 13790.44 00:12:47.287 PCIE (0000:00:12.0) NSID 1 from core 1: 5000.14 19.53 3199.38 1095.56 13549.30 00:12:47.287 PCIE (0000:00:12.0) NSID 2 from core 1: 5000.14 19.53 3199.37 1081.90 13697.85 00:12:47.287 PCIE (0000:00:12.0) NSID 3 from core 1: 5000.14 19.53 3199.31 1097.02 13071.07 00:12:47.287 ======================================================== 00:12:47.287 Total : 30000.87 117.19 3199.04 1058.96 14025.63 00:12:47.287 00:12:49.818 Initializing NVMe Controllers 00:12:49.818 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:49.818 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:49.818 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:49.818 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:49.818 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:49.818 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:49.818 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:49.818 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:49.818 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:49.818 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:49.818 Initialization complete. Launching workers. 
00:12:49.818 ======================================================== 00:12:49.818 Latency(us) 00:12:49.818 Device Information : IOPS MiB/s Average min max 00:12:49.818 PCIE (0000:00:10.0) NSID 1 from core 2: 3143.95 12.28 5087.61 1067.88 13548.21 00:12:49.818 PCIE (0000:00:11.0) NSID 1 from core 2: 3143.95 12.28 5088.75 1094.51 13483.77 00:12:49.818 PCIE (0000:00:13.0) NSID 1 from core 2: 3143.95 12.28 5087.40 1089.45 15667.95 00:12:49.818 PCIE (0000:00:12.0) NSID 1 from core 2: 3143.95 12.28 5084.46 1069.58 13826.15 00:12:49.818 PCIE (0000:00:12.0) NSID 2 from core 2: 3143.95 12.28 5084.36 1083.03 14404.12 00:12:49.818 PCIE (0000:00:12.0) NSID 3 from core 2: 3143.95 12.28 5084.27 1018.22 14367.22 00:12:49.818 ======================================================== 00:12:49.818 Total : 18863.72 73.69 5086.14 1018.22 15667.95 00:12:49.818 00:12:49.818 18:10:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65164 00:12:49.818 18:10:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65165 00:12:49.818 00:12:49.818 real 0m10.996s 00:12:49.818 user 0m18.577s 00:12:49.818 sys 0m1.052s 00:12:49.818 18:10:59 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.818 ************************************ 00:12:49.818 END TEST nvme_multi_secondary 00:12:49.818 ************************************ 00:12:49.818 18:10:59 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:12:49.818 18:10:59 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:12:49.818 18:10:59 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:12:49.818 18:10:59 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64103 ]] 00:12:49.818 18:10:59 nvme -- common/autotest_common.sh@1094 -- # kill 64103 00:12:49.818 18:10:59 nvme -- common/autotest_common.sh@1095 -- # wait 64103 00:12:49.818 [2024-12-06 18:10:59.953253] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65037) is not found. Dropping the request. 00:12:49.818 [2024-12-06 18:10:59.953435] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65037) is not found. Dropping the request. 00:12:49.818 [2024-12-06 18:10:59.953517] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65037) is not found. Dropping the request. 00:12:49.818 [2024-12-06 18:10:59.953572] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65037) is not found. Dropping the request. 00:12:49.818 [2024-12-06 18:10:59.960575] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65037) is not found. Dropping the request. 00:12:49.818 [2024-12-06 18:10:59.960693] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65037) is not found. Dropping the request. 00:12:49.818 [2024-12-06 18:10:59.960740] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65037) is not found. Dropping the request. 00:12:49.818 [2024-12-06 18:10:59.960808] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65037) is not found. Dropping the request. 00:12:49.818 [2024-12-06 18:10:59.965541] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65037) is not found. Dropping the request. 
00:12:49.818 [2024-12-06 18:10:59.965612] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65037) is not found. Dropping the request. 00:12:49.818 [2024-12-06 18:10:59.965641] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65037) is not found. Dropping the request. 00:12:49.818 [2024-12-06 18:10:59.965672] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65037) is not found. Dropping the request. 00:12:49.818 [2024-12-06 18:10:59.970202] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65037) is not found. Dropping the request. 00:12:49.818 [2024-12-06 18:10:59.970292] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65037) is not found. Dropping the request. 00:12:49.818 [2024-12-06 18:10:59.970323] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65037) is not found. Dropping the request. 00:12:49.818 [2024-12-06 18:10:59.970355] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65037) is not found. Dropping the request. 00:12:49.818 18:11:00 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:12:49.818 18:11:00 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:12:49.818 18:11:00 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:49.818 18:11:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:49.818 18:11:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.818 18:11:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.818 ************************************ 00:12:49.818 START TEST bdev_nvme_reset_stuck_adm_cmd 00:12:49.818 ************************************ 00:12:49.818 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:49.818 * Looking for test storage... 
00:12:49.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:49.818 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:49.818 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:12:49.818 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:49.818 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:49.818 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.819 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:12:50.077 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.077 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.077 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.077 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:12:50.077 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.077 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:50.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.077 --rc genhtml_branch_coverage=1 00:12:50.077 --rc genhtml_function_coverage=1 00:12:50.077 --rc genhtml_legend=1 00:12:50.077 --rc geninfo_all_blocks=1 00:12:50.077 --rc geninfo_unexecuted_blocks=1 00:12:50.077 00:12:50.077 ' 00:12:50.077 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:50.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.077 --rc genhtml_branch_coverage=1 00:12:50.077 --rc genhtml_function_coverage=1 00:12:50.077 --rc genhtml_legend=1 00:12:50.077 --rc geninfo_all_blocks=1 00:12:50.077 --rc geninfo_unexecuted_blocks=1 00:12:50.077 00:12:50.077 ' 00:12:50.077 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:50.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.077 --rc genhtml_branch_coverage=1 00:12:50.077 --rc genhtml_function_coverage=1 00:12:50.077 --rc genhtml_legend=1 00:12:50.077 --rc geninfo_all_blocks=1 00:12:50.077 --rc geninfo_unexecuted_blocks=1 00:12:50.077 00:12:50.077 ' 00:12:50.077 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:50.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.077 --rc genhtml_branch_coverage=1 00:12:50.077 --rc genhtml_function_coverage=1 00:12:50.077 --rc genhtml_legend=1 00:12:50.077 --rc geninfo_all_blocks=1 00:12:50.077 --rc geninfo_unexecuted_blocks=1 00:12:50.077 00:12:50.077 ' 00:12:50.077 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:12:50.078 
18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65332 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65332 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65332 ']' 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.078 18:11:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:50.078 [2024-12-06 18:11:00.613697] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:12:50.078 [2024-12-06 18:11:00.614261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65332 ] 00:12:50.335 [2024-12-06 18:11:00.815492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.592 [2024-12-06 18:11:00.936933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.592 [2024-12-06 18:11:00.936985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.592 [2024-12-06 18:11:00.937173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.592 [2024-12-06 18:11:00.937210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.523 18:11:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.523 18:11:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:12:51.523 18:11:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:12:51.523 18:11:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.523 18:11:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:51.523 nvme0n1 00:12:51.523 18:11:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.523 18:11:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:51.523 18:11:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_kfHYW.txt 00:12:51.523 18:11:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:51.523 18:11:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.523 18:11:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:51.523 true 00:12:51.523 18:11:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.523 18:11:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:51.523 18:11:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733508661 00:12:51.523 18:11:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65361 00:12:51.523 18:11:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:51.523 18:11:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:51.523 
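Condensing the setup just traced: the test attaches the first controller as bdev nvme0 through the spdk_tgt started above (pid 65332), arms a one-shot injected failure (sct 0 / sc 1) on the next admin Get Features (opc 10) with --do_not_submit so the command is held for up to 15 s, then fires that command in the background; only the controller reset traced below can complete it. A sketch of the same RPC sequence, assuming a running spdk_tgt on the default /var/tmp/spdk.sock:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # 64-byte admin SQE in base64, copied from the trace above: byte 0 is 0x0a
  # (Get Features) and cdw10 is 0x7 (Number of Queues), matching the
  # completion printed further below.
  cmd=CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd" &   # blocks
  sleep 2
  "$rpc" bdev_nvme_reset_controller nvme0   # the reset completes the held command
  wait                                      # send_cmd returns the injected status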
18:11:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:53.422 18:11:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:53.422 18:11:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.422 18:11:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:53.422 [2024-12-06 18:11:03.921468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:12:53.422 [2024-12-06 18:11:03.921981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:53.422 [2024-12-06 18:11:03.922121] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:53.422 [2024-12-06 18:11:03.922330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.422 [2024-12-06 18:11:03.924330] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:12:53.422 18:11:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.422 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65361 00:12:53.422 18:11:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65361 00:12:53.422 18:11:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65361 00:12:53.422 18:11:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:53.422 18:11:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:53.422 18:11:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:53.422 18:11:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.422 18:11:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:53.422 18:11:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.422 18:11:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:53.422 18:11:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_kfHYW.txt 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
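The trace below extracts the completion (.cpl) from send_cmd's JSON output and decodes the injected status out of it with the harness's base64_decode_bits helper (shift/mask pairs 1,255 for SC and 9,3 for SCT). An equivalent standalone decode; the byte indexing here follows the standard NVMe completion layout (status halfword in bytes 14-15; SC in bits 1-8, SCT in bits 9-11), not the helper's internals:

  cpl='AAAAAAAAAAAAAAAAAAACAA=='                  # 16-byte completion, base64
  bytes=($(printf %s "$cpl" | base64 -d | hexdump -ve '/1 "0x%02x\n"'))
  status=$(( ${bytes[15]} << 8 | ${bytes[14]} ))  # -> 0x0002
  echo $(( (status >> 1) & 0xff ))                # 1 -> SC, the injected sc
  echo $(( (status >> 9) & 0x7 ))                 # 0 -> SCT, the injected sct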
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_kfHYW.txt 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65332 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65332 ']' 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65332 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65332 00:12:53.680 killing process with pid 65332 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65332' 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65332 00:12:53.680 18:11:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65332 00:12:56.220 18:11:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:56.220 18:11:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:56.220 ************************************ 00:12:56.220 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:56.220 ************************************ 00:12:56.220 00:12:56.220 real 0m6.368s 
00:12:56.220 user 0m22.146s 00:12:56.220 sys 0m0.822s 00:12:56.220 18:11:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.220 18:11:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:56.220 18:11:06 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:56.220 18:11:06 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:56.220 18:11:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:56.220 18:11:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.220 18:11:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.220 ************************************ 00:12:56.220 START TEST nvme_fio 00:12:56.220 ************************************ 00:12:56.220 18:11:06 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:12:56.220 18:11:06 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:56.220 18:11:06 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:56.220 18:11:06 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:56.220 18:11:06 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:56.220 18:11:06 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:12:56.221 18:11:06 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:56.221 18:11:06 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:56.221 18:11:06 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:56.221 18:11:06 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:56.221 18:11:06 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:56.221 18:11:06 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:12:56.221 18:11:06 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:56.221 18:11:06 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:56.221 18:11:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:56.221 18:11:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:56.479 18:11:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:56.479 18:11:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:56.737 18:11:07 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:56.737 18:11:07 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:56.737 18:11:07 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:56.737 18:11:07 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:56.737 18:11:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:56.737 18:11:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:56.737 18:11:07 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:56.737 18:11:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:56.737 18:11:07 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:56.737 18:11:07 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:56.737 18:11:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:56.737 18:11:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:56.737 18:11:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:56.995 18:11:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:56.995 18:11:07 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:56.995 18:11:07 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:56.995 18:11:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:56.995 18:11:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:56.995 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:56.995 fio-3.35 00:12:56.995 Starting 1 thread 00:13:01.185 00:13:01.185 test: (groupid=0, jobs=1): err= 0: pid=65511: Fri Dec 6 18:11:10 2024 00:13:01.185 read: IOPS=21.7k, BW=84.6MiB/s (88.7MB/s)(169MiB/2001msec) 00:13:01.185 slat (nsec): min=3954, max=87248, avg=4750.21, stdev=1186.53 00:13:01.185 clat (usec): min=189, max=13558, avg=2950.33, stdev=395.71 00:13:01.185 lat (usec): min=194, max=13645, avg=2955.08, stdev=396.24 00:13:01.185 clat percentiles (usec): 00:13:01.185 | 1.00th=[ 2638], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2802], 00:13:01.185 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2900], 00:13:01.185 | 70.00th=[ 2933], 80.00th=[ 2999], 90.00th=[ 3130], 95.00th=[ 3392], 00:13:01.185 | 99.00th=[ 4293], 99.50th=[ 4490], 99.90th=[ 8029], 99.95th=[10814], 00:13:01.185 | 99.99th=[13173] 00:13:01.185 bw ( KiB/s): min=84304, max=89152, per=100.00%, avg=87040.00, stdev=2483.51, samples=3 00:13:01.185 iops : min=21076, max=22288, avg=21760.00, stdev=620.88, samples=3 00:13:01.185 write: IOPS=21.5k, BW=84.0MiB/s (88.0MB/s)(168MiB/2001msec); 0 zone resets 00:13:01.185 slat (nsec): min=4093, max=68385, avg=4995.13, stdev=1232.62 00:13:01.185 clat (usec): min=266, max=13351, avg=2956.26, stdev=403.62 00:13:01.185 lat (usec): min=270, max=13369, avg=2961.26, stdev=404.10 00:13:01.185 clat percentiles (usec): 00:13:01.185 | 1.00th=[ 2638], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2802], 00:13:01.185 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:13:01.185 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3163], 95.00th=[ 3392], 00:13:01.185 | 99.00th=[ 4293], 99.50th=[ 4555], 99.90th=[ 8586], 99.95th=[11076], 00:13:01.185 | 99.99th=[12780] 00:13:01.185 bw ( KiB/s): min=84200, max=88856, per=100.00%, avg=87232.00, stdev=2628.01, samples=3 00:13:01.185 iops : min=21050, max=22214, avg=21808.00, stdev=657.00, samples=3 00:13:01.185 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:13:01.185 lat (msec) : 2=0.05%, 4=98.04%, 10=1.81%, 20=0.07% 00:13:01.185 cpu : usr=99.15%, sys=0.20%, 
ctx=5, majf=0, minf=608 00:13:01.185 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:01.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:01.185 issued rwts: total=43333,43012,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.185 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:01.185 00:13:01.185 Run status group 0 (all jobs): 00:13:01.185 READ: bw=84.6MiB/s (88.7MB/s), 84.6MiB/s-84.6MiB/s (88.7MB/s-88.7MB/s), io=169MiB (177MB), run=2001-2001msec 00:13:01.185 WRITE: bw=84.0MiB/s (88.0MB/s), 84.0MiB/s-84.0MiB/s (88.0MB/s-88.0MB/s), io=168MiB (176MB), run=2001-2001msec 00:13:01.185 ----------------------------------------------------- 00:13:01.185 Suppressions used: 00:13:01.185 count bytes template 00:13:01.185 1 32 /usr/src/fio/parse.c 00:13:01.185 1 8 libtcmalloc_minimal.so 00:13:01.185 ----------------------------------------------------- 00:13:01.185 00:13:01.185 18:11:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:01.185 18:11:11 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:01.185 18:11:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:01.185 18:11:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:01.185 18:11:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:01.185 18:11:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:01.445 18:11:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:01.445 18:11:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:01.445 18:11:11 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:01.445 18:11:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:01.445 18:11:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:01.445 18:11:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:01.445 18:11:11 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:01.445 18:11:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:01.445 18:11:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:01.445 18:11:11 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:01.445 18:11:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:01.445 18:11:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:01.445 18:11:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:01.445 18:11:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:01.445 18:11:11 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:01.445 18:11:11 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:01.445 18:11:11 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:01.445 18:11:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:01.445 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:01.445 fio-3.35 00:13:01.445 Starting 1 thread 00:13:04.755 00:13:04.755 test: (groupid=0, jobs=1): err= 0: pid=65577: Fri Dec 6 18:11:15 2024 00:13:04.755 read: IOPS=22.0k, BW=85.8MiB/s (90.0MB/s)(172MiB/2001msec) 00:13:04.755 slat (nsec): min=3906, max=64561, avg=4689.08, stdev=1185.74 00:13:04.755 clat (usec): min=311, max=10993, avg=2905.17, stdev=388.62 00:13:04.755 lat (usec): min=316, max=11058, avg=2909.86, stdev=389.18 00:13:04.755 clat percentiles (usec): 00:13:04.755 | 1.00th=[ 2573], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2769], 00:13:04.755 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2868], 00:13:04.755 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 3032], 95.00th=[ 3294], 00:13:04.756 | 99.00th=[ 4424], 99.50th=[ 5342], 99.90th=[ 8586], 99.95th=[ 8848], 00:13:04.756 | 99.99th=[10683] 00:13:04.756 bw ( KiB/s): min=85664, max=87192, per=98.36%, avg=86405.33, stdev=765.01, samples=3 00:13:04.756 iops : min=21416, max=21798, avg=21601.33, stdev=191.25, samples=3 00:13:04.756 write: IOPS=21.8k, BW=85.2MiB/s (89.4MB/s)(171MiB/2001msec); 0 zone resets 00:13:04.756 slat (nsec): min=4042, max=50027, avg=4906.79, stdev=1137.70 00:13:04.756 clat (usec): min=195, max=10737, avg=2915.53, stdev=420.11 00:13:04.756 lat (usec): min=200, max=10756, avg=2920.44, stdev=420.62 00:13:04.756 clat percentiles (usec): 00:13:04.756 | 1.00th=[ 2573], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2769], 00:13:04.756 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2868], 00:13:04.756 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3326], 00:13:04.756 | 99.00th=[ 4490], 99.50th=[ 5407], 99.90th=[ 8717], 99.95th=[ 9110], 00:13:04.756 | 99.99th=[10552] 00:13:04.756 bw ( KiB/s): min=85648, max=87280, per=99.21%, avg=86600.00, stdev=849.32, samples=3 00:13:04.756 iops : min=21412, max=21820, avg=21650.00, stdev=212.33, samples=3 00:13:04.756 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:13:04.756 lat (msec) : 2=0.05%, 4=98.57%, 10=1.31%, 20=0.03% 00:13:04.756 cpu : usr=99.35%, sys=0.05%, ctx=3, majf=0, minf=609 00:13:04.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:04.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:04.756 issued rwts: total=43946,43665,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:04.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:04.756 00:13:04.756 Run status group 0 (all jobs): 00:13:04.756 READ: bw=85.8MiB/s (90.0MB/s), 85.8MiB/s-85.8MiB/s (90.0MB/s-90.0MB/s), io=172MiB (180MB), run=2001-2001msec 00:13:04.756 WRITE: bw=85.2MiB/s (89.4MB/s), 85.2MiB/s-85.2MiB/s (89.4MB/s-89.4MB/s), io=171MiB (179MB), run=2001-2001msec 00:13:04.756 ----------------------------------------------------- 00:13:04.756 Suppressions used: 00:13:04.756 count bytes template 00:13:04.756 1 32 /usr/src/fio/parse.c 00:13:04.756 1 8 libtcmalloc_minimal.so 00:13:04.756 ----------------------------------------------------- 00:13:04.756 
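The fio passes above repeat the same recipe for every controller: identify the namespace, confirm it is not formatted with extended-LBA metadata, pick --bs=4096, then launch fio through the SPDK NVMe plugin with the ASAN runtime preloaded ahead of it. A minimal standalone sketch of that invocation, with paths parameterized (the defaults mirror this log's layout and are otherwise assumptions):

  #!/usr/bin/env bash
  # Sketch of the fio-plugin launch traced above; adjust paths to your setup.
  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
  FIO_BIN=${FIO_BIN:-/usr/src/fio/fio}
  PLUGIN=$SPDK_DIR/build/fio/spdk_nvme

  # With an ASAN build, the sanitizer runtime must be preloaded before the
  # plugin; the harness discovers it the same way (ldd | grep libasan).
  asan_lib=$(ldd "$PLUGIN" | awk '/libasan/ {print $3}')

  # fio treats ':' in --filename as a separator, so the PCIe address is
  # written with dots: trtype=PCIe traddr=0000.00.10.0
  LD_PRELOAD="${asan_lib:+$asan_lib }$PLUGIN" \
      "$FIO_BIN" "$SPDK_DIR/app/fio/nvme/example_config.fio" \
      '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096

As a sanity check, the run-status lines agree with the issued I/O counts: for the 0000:00:11.0 pass, 43946 reads in 2.001 s is about 21.96k IOPS, and 21.96k x 4096 B is roughly 90.0 MB/s, matching the reported read bandwidth.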
00:13:05.014 18:11:15 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:05.014 18:11:15 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:05.014 18:11:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:05.014 18:11:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:05.272 18:11:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:05.272 18:11:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:05.531 18:11:15 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:05.531 18:11:15 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:05.531 18:11:15 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:05.531 18:11:15 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:05.531 18:11:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:05.531 18:11:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:05.531 18:11:15 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:05.531 18:11:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:05.531 18:11:15 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:05.531 18:11:15 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:05.531 18:11:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:05.531 18:11:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:05.531 18:11:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:05.531 18:11:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:05.531 18:11:15 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:05.531 18:11:15 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:05.531 18:11:15 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:05.531 18:11:15 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:05.531 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:05.531 fio-3.35 00:13:05.531 Starting 1 thread 00:13:09.716 00:13:09.716 test: (groupid=0, jobs=1): err= 0: pid=65638: Fri Dec 6 18:11:20 2024 00:13:09.716 read: IOPS=21.9k, BW=85.4MiB/s (89.5MB/s)(171MiB/2001msec) 00:13:09.716 slat (nsec): min=3817, max=66125, avg=4665.86, stdev=1088.93 00:13:09.716 clat (usec): min=231, max=10213, avg=2921.52, stdev=324.19 00:13:09.716 lat (usec): min=236, max=10279, avg=2926.18, stdev=324.59 00:13:09.716 clat percentiles (usec): 00:13:09.716 | 1.00th=[ 2573], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2769], 00:13:09.716 | 
30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:13:09.716 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3064], 95.00th=[ 3523], 00:13:09.716 | 99.00th=[ 4015], 99.50th=[ 4359], 99.90th=[ 5604], 99.95th=[ 7898], 00:13:09.716 | 99.99th=[ 9896] 00:13:09.716 bw ( KiB/s): min=82800, max=89672, per=98.65%, avg=86221.33, stdev=3436.09, samples=3 00:13:09.716 iops : min=20700, max=22418, avg=21555.33, stdev=859.02, samples=3 00:13:09.716 write: IOPS=21.7k, BW=84.8MiB/s (88.9MB/s)(170MiB/2001msec); 0 zone resets 00:13:09.716 slat (nsec): min=3851, max=35870, avg=4906.93, stdev=1074.85 00:13:09.716 clat (usec): min=203, max=10002, avg=2929.66, stdev=331.42 00:13:09.716 lat (usec): min=208, max=10024, avg=2934.57, stdev=331.80 00:13:09.716 clat percentiles (usec): 00:13:09.716 | 1.00th=[ 2573], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2802], 00:13:09.716 | 30.00th=[ 2835], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:13:09.716 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3097], 95.00th=[ 3523], 00:13:09.716 | 99.00th=[ 4047], 99.50th=[ 4424], 99.90th=[ 6128], 99.95th=[ 8094], 00:13:09.716 | 99.99th=[ 9503] 00:13:09.716 bw ( KiB/s): min=82824, max=89368, per=99.50%, avg=86349.33, stdev=3301.29, samples=3 00:13:09.716 iops : min=20706, max=22342, avg=21587.33, stdev=825.32, samples=3 00:13:09.716 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:13:09.716 lat (msec) : 2=0.34%, 4=98.47%, 10=1.15%, 20=0.01% 00:13:09.716 cpu : usr=99.40%, sys=0.10%, ctx=2, majf=0, minf=608 00:13:09.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:09.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:09.716 issued rwts: total=43724,43415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.716 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:09.716 00:13:09.716 Run status group 0 (all jobs): 00:13:09.716 READ: bw=85.4MiB/s (89.5MB/s), 85.4MiB/s-85.4MiB/s (89.5MB/s-89.5MB/s), io=171MiB (179MB), run=2001-2001msec 00:13:09.716 WRITE: bw=84.8MiB/s (88.9MB/s), 84.8MiB/s-84.8MiB/s (88.9MB/s-88.9MB/s), io=170MiB (178MB), run=2001-2001msec 00:13:09.974 ----------------------------------------------------- 00:13:09.974 Suppressions used: 00:13:09.974 count bytes template 00:13:09.974 1 32 /usr/src/fio/parse.c 00:13:09.974 1 8 libtcmalloc_minimal.so 00:13:09.974 ----------------------------------------------------- 00:13:09.974 00:13:09.974 18:11:20 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:09.974 18:11:20 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:09.974 18:11:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:09.974 18:11:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:10.232 18:11:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:10.232 18:11:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:10.490 18:11:20 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:10.490 18:11:20 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:10.490 18:11:20 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:10.490 18:11:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:10.490 18:11:20 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:10.490 18:11:20 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:10.490 18:11:20 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:10.490 18:11:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:10.490 18:11:20 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:10.490 18:11:20 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:10.490 18:11:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:10.490 18:11:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:10.490 18:11:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:10.490 18:11:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:10.490 18:11:20 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:10.490 18:11:20 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:10.490 18:11:20 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:10.490 18:11:20 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:10.750 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:10.750 fio-3.35 00:13:10.750 Starting 1 thread 00:13:17.355 00:13:17.355 test: (groupid=0, jobs=1): err= 0: pid=65704: Fri Dec 6 18:11:27 2024 00:13:17.355 read: IOPS=22.5k, BW=87.8MiB/s (92.1MB/s)(176MiB/2001msec) 00:13:17.355 slat (nsec): min=3874, max=89955, avg=4596.70, stdev=1249.79 00:13:17.355 clat (usec): min=224, max=10592, avg=2842.01, stdev=468.77 00:13:17.355 lat (usec): min=229, max=10682, avg=2846.61, stdev=469.48 00:13:17.355 clat percentiles (usec): 00:13:17.355 | 1.00th=[ 2540], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2704], 00:13:17.355 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2802], 00:13:17.355 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2933], 95.00th=[ 2999], 00:13:17.355 | 99.00th=[ 5342], 99.50th=[ 6652], 99.90th=[ 8586], 99.95th=[ 8586], 00:13:17.355 | 99.99th=[10290] 00:13:17.355 bw ( KiB/s): min=86427, max=91560, per=99.20%, avg=89203.67, stdev=2592.19, samples=3 00:13:17.355 iops : min=21606, max=22890, avg=22301.33, stdev=648.61, samples=3 00:13:17.355 write: IOPS=22.3k, BW=87.3MiB/s (91.5MB/s)(175MiB/2001msec); 0 zone resets 00:13:17.355 slat (nsec): min=4018, max=34630, avg=4869.31, stdev=1127.51 00:13:17.355 clat (usec): min=199, max=10380, avg=2842.85, stdev=458.11 00:13:17.355 lat (usec): min=204, max=10401, avg=2847.72, stdev=458.70 00:13:17.355 clat percentiles (usec): 00:13:17.355 | 1.00th=[ 2540], 5.00th=[ 2606], 10.00th=[ 2671], 20.00th=[ 2704], 00:13:17.355 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2802], 00:13:17.355 | 70.00th=[ 2835], 80.00th=[ 
2868], 90.00th=[ 2933], 95.00th=[ 2999], 00:13:17.355 | 99.00th=[ 5014], 99.50th=[ 6652], 99.90th=[ 8586], 99.95th=[ 8586], 00:13:17.355 | 99.99th=[ 9896] 00:13:17.355 bw ( KiB/s): min=86091, max=92584, per=99.99%, avg=89374.33, stdev=3247.13, samples=3 00:13:17.355 iops : min=21522, max=23146, avg=22343.33, stdev=812.16, samples=3 00:13:17.355 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:13:17.356 lat (msec) : 2=0.05%, 4=98.49%, 10=1.41%, 20=0.01% 00:13:17.356 cpu : usr=99.40%, sys=0.05%, ctx=5, majf=0, minf=606 00:13:17.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:17.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:17.356 issued rwts: total=44982,44713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:17.356 00:13:17.356 Run status group 0 (all jobs): 00:13:17.356 READ: bw=87.8MiB/s (92.1MB/s), 87.8MiB/s-87.8MiB/s (92.1MB/s-92.1MB/s), io=176MiB (184MB), run=2001-2001msec 00:13:17.356 WRITE: bw=87.3MiB/s (91.5MB/s), 87.3MiB/s-87.3MiB/s (91.5MB/s-91.5MB/s), io=175MiB (183MB), run=2001-2001msec 00:13:17.356 ----------------------------------------------------- 00:13:17.356 Suppressions used: 00:13:17.356 count bytes template 00:13:17.356 1 32 /usr/src/fio/parse.c 00:13:17.356 1 8 libtcmalloc_minimal.so 00:13:17.356 ----------------------------------------------------- 00:13:17.356 00:13:17.356 18:11:27 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:17.356 18:11:27 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:13:17.356 00:13:17.356 real 0m20.795s 00:13:17.356 user 0m14.812s 00:13:17.356 sys 0m8.091s 00:13:17.356 18:11:27 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.356 18:11:27 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:13:17.356 ************************************ 00:13:17.356 END TEST nvme_fio 00:13:17.356 ************************************ 00:13:17.356 00:13:17.356 real 1m36.020s 00:13:17.356 user 3m43.269s 00:13:17.356 sys 0m27.151s 00:13:17.356 18:11:27 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.356 18:11:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.356 ************************************ 00:13:17.356 END TEST nvme 00:13:17.356 ************************************ 00:13:17.356 18:11:27 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:13:17.356 18:11:27 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:17.356 18:11:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:17.356 18:11:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.356 18:11:27 -- common/autotest_common.sh@10 -- # set +x 00:13:17.356 ************************************ 00:13:17.356 START TEST nvme_scc 00:13:17.356 ************************************ 00:13:17.356 18:11:27 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:17.356 * Looking for test storage... 
00:13:17.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:17.356 18:11:27 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:17.356 18:11:27 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:13:17.356 18:11:27 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:17.356 18:11:27 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@345 -- # : 1 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@368 -- # return 0 00:13:17.356 18:11:27 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.356 18:11:27 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:17.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.356 --rc genhtml_branch_coverage=1 00:13:17.356 --rc genhtml_function_coverage=1 00:13:17.356 --rc genhtml_legend=1 00:13:17.356 --rc geninfo_all_blocks=1 00:13:17.356 --rc geninfo_unexecuted_blocks=1 00:13:17.356 00:13:17.356 ' 00:13:17.356 18:11:27 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:17.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.356 --rc genhtml_branch_coverage=1 00:13:17.356 --rc genhtml_function_coverage=1 00:13:17.356 --rc genhtml_legend=1 00:13:17.356 --rc geninfo_all_blocks=1 00:13:17.356 --rc geninfo_unexecuted_blocks=1 00:13:17.356 00:13:17.356 ' 00:13:17.356 18:11:27 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:17.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.356 --rc genhtml_branch_coverage=1 00:13:17.356 --rc genhtml_function_coverage=1 00:13:17.356 --rc genhtml_legend=1 00:13:17.356 --rc geninfo_all_blocks=1 00:13:17.356 --rc geninfo_unexecuted_blocks=1 00:13:17.356 00:13:17.356 ' 00:13:17.356 18:11:27 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:17.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.356 --rc genhtml_branch_coverage=1 00:13:17.356 --rc genhtml_function_coverage=1 00:13:17.356 --rc genhtml_legend=1 00:13:17.356 --rc geninfo_all_blocks=1 00:13:17.356 --rc geninfo_unexecuted_blocks=1 00:13:17.356 00:13:17.356 ' 00:13:17.356 18:11:27 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:17.356 18:11:27 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:17.356 18:11:27 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:17.356 18:11:27 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:17.356 18:11:27 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.356 18:11:27 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.356 18:11:27 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.356 18:11:27 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.356 18:11:27 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.356 18:11:27 nvme_scc -- paths/export.sh@5 -- # export PATH 00:13:17.356 18:11:27 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
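Below, scan_nvme_ctrls walks /sys/class/nvme/nvme*, maps each controller to its PCI address, and converts the output of nvme id-ctrl into a register-indexed associative array; every IFS=: / read / eval triplet in the trace is one field of that conversion. A condensed standalone sketch of the same parse, assuming nvme-cli's "name : value" id-ctrl format (the harness itself targets a dynamically named global array, which is why the trace goes through eval 'nvme0[vid]="0x1b36"'):

  #!/usr/bin/env bash
  # Sketch of the id-ctrl parse traced below; binary path taken from the log.
  declare -A nvme0
  while IFS=: read -r reg val; do
      [[ -n $val ]] || continue           # skip headings with no value
      reg=${reg//[[:space:]]/}            # 'vid       ' -> 'vid'
      val=${val#"${val%%[![:space:]]*}"}  # left-trim the value
      nvme0[$reg]=$val                    # e.g. nvme0[vid]=0x1b36
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]}"

As in the trace, string fields such as sn and mn keep id-ctrl's trailing padding.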
00:13:17.356 18:11:27 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:13:17.356 18:11:27 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:17.356 18:11:27 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:13:17.356 18:11:27 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:17.356 18:11:27 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:13:17.356 18:11:27 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:17.356 18:11:27 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:17.356 18:11:27 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:17.356 18:11:27 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:13:17.356 18:11:27 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:17.356 18:11:27 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:13:17.356 18:11:27 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:13:17.356 18:11:27 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:17.356 18:11:27 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:17.924 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:18.183 Waiting for block devices as requested 00:13:18.183 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:18.183 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:18.443 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:18.443 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:23.724 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:23.724 18:11:34 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:23.724 18:11:34 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:23.724 18:11:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:23.724 18:11:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:23.724 18:11:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:23.724 18:11:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:23.725 18:11:34 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:23.725 18:11:34 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:23.725 18:11:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:23.725 18:11:34 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:23.725 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:23.726 18:11:34 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.726 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.727 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:23.728 18:11:34 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # 
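Everything above is bash xtrace of a single pattern: nvme_get runs nvme-cli against a device (id-ctrl for the controller, now id-ns for namespace ng0n1), splits each "field : value" output line on IFS=:, and evals the pair into a global associative array named after the device. A condensed sketch of that loop, paraphrased from the trace rather than copied from nvme/functions.sh, so the details are approximate:

  # Approximation of the nvme_get pattern visible in the trace above.
  nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                 # e.g. declare -gA 'ng0n1=()'
    while IFS=: read -r reg val; do
      [[ -n $val ]] || continue         # skip lines with no value
      reg=${reg//[[:space:]]/}          # field name, whitespace stripped
      eval "${ref}[$reg]=\"\$val\""     # e.g. ng0n1[nsze]=0x140000
    done < <("$@")                      # e.g. nvme id-ns /dev/ng0n1
  }

Hence every field of every controller and namespace ends up addressable as nvme0[...], ng0n1[...], nvme0n1[...], and so on.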
read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:13:23.728 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:13:23.729 
18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:13:23.729 18:11:34 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.729 18:11:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:23.729 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:23.730 18:11:34 nvme_scc 
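Worth reading out of the ng0n1 dump that just completed: flbas=0x4 selects LBA format 4, whose descriptor is "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte blocks with no metadata, and nsze=0x140000 logical blocks. Using only those captured values, the namespace size works out to exactly 5 GiB:

  # Sanity arithmetic on the values captured above for ng0n1.
  nsze=0x140000   # namespace size in logical blocks
  lbads=12        # from lbaf4 'ms:0 lbads:12 rp:0 (in use)'
  echo $(( nsze * (1 << lbads) ))              # 5368709120 bytes
  echo $(( (nsze * (1 << lbads)) >> 30 ))GiB   # 5GiB

The identical values now repeating for nvme0n1 are expected: ng0n1 is the generic character device and nvme0n1 the block device for the same namespace.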
-- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.730 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:23.731 18:11:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:23.731 18:11:34 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:23.731 18:11:34 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:23.731 18:11:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:23.731 18:11:34 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:23.731 18:11:34 
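With nvme0 fully scanned, the trace registers it in the script's global maps (ctrls, nvmes, bdfs, ordered_ctrls) and the outer /sys/class/nvme/nvme* loop advances to nvme1 at PCI address 0000:00:10.0; pci_can_use returns 0 because no PCI allow/block list is configured, hence the empty left-hand side in the [[ =~ 0000:00:10.0 ]] test. The bookkeeping amounts to roughly the following, with the BDF resolution an assumption on my part (the log only shows the resulting pci= value):

  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  for ctrl in /sys/class/nvme/nvme*; do
    ctrl_dev=${ctrl##*/}                            # nvme0, nvme1, ...
    pci=$(basename "$(readlink -f "$ctrl/device")") # assumed: BDF via the sysfs symlink
    # ... per-field nvme_get scan as sketched earlier ...
    ctrls["$ctrl_dev"]=$ctrl_dev          # ctrls[nvme0]=nvme0
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns     # name of the namespace map, nvme0_ns
    bdfs["$ctrl_dev"]=$pci                # bdfs[nvme0]=0000:00:11.0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
  done
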
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.731 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.732 
18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:23.732 
18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.732 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 
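The wctemp/cctemp values just captured for nvme1 are in kelvin, as Identify Controller reports them: a 70 C warning threshold and a 100 C critical threshold.

  # WCTEMP/CCTEMP are reported in kelvin; convert for human reading.
  wctemp=343; cctemp=373
  echo "warning:  $(( wctemp - 273 )) C"   # 70 C
  echo "critical: $(( cctemp - 273 )) C"   # 100 C
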
'nvme1[mtfa]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.733 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
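The xtrace above is nvme/functions.sh's nvme_get helper splitting each "reg : val" line of `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1` on ':' (the IFS=: / read -r reg val pair at functions.sh@21) and eval'ing the result into a global associative array, which is how nvme1[aerl]=3, nvme1[sqes]=0x66 and the rest get recorded. A minimal sketch of that loop, using illustrative names (nvme_get_sketch, NVME_CMD) rather than the exact functions.sh source:

    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                  # e.g. declare -gA nvme1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # "sqes      " -> "sqes", "ps 0" -> "ps0"
            val=${val# }                     # drop the space after the colon
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"       # nvme1[sqes]=0x66
        done < <("${NVME_CMD:-nvme}" "$cmd" "$dev")
    }

    # Usage against the run above:
    #   nvme_get_sketch nvme1 id-ctrl /dev/nvme1
    #   echo "${nvme1[aerl]}"    # -> 3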
00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.734 18:11:34 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:23.734 18:11:34 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:23.734 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:13:23.735 18:11:34 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.735 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 
18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
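The reason id-ns runs twice for the same namespace here is the extglob pattern in the for-ns loop at functions.sh@54: under /sys/class/nvme/nvme1 it matches both the generic character node ng1n1 and the block node nvme1n1, and because _ctrl_ns is keyed by `${ns##*n}` (the nsid), the nvme1n1 entry later overwrites the ng1n1 one for nsid 1. A standalone sketch of that glob, assuming the same sysfs layout:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}                     # ng1n1, then nvme1n1
        echo "nsid ${ns_dev##*n}: would run id-ns against /dev/$ns_dev"
    done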
00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.736 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:23.737 
18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:23.737 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:23.738 18:11:34 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:23.738 18:11:34 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:23.738 18:11:34 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:23.738 18:11:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:23.738 18:11:34 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
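With nvme1 fully identified, functions.sh@60-63 above filed it into a set of maps (ctrls, nvmes, bdfs, ordered_ctrls, pinning nvme1 to PCI 0000:00:10.0) before the outer /sys/class/nvme/nvme* loop moved on to nvme2 at 0000:00:12.0, whose id-ctrl parse is now in progress. A toy lookup over that shape, with hand-filled placeholder values standing in for the scan:

    # Placeholder state mirroring the registration above (values hand-filled):
    declare -A ctrls=( [nvme1]=nvme1 )
    declare -A bdfs=( [nvme1]=0000:00:10.0 )
    declare -A nvme1_ns=( [1]=nvme1n1 )

    for ctrl_dev in "${!ctrls[@]}"; do
        declare -n ns_map=${ctrl_dev}_ns     # same nameref trick as functions.sh@53
        echo "$ctrl_dev @ ${bdfs[$ctrl_dev]}: nsid 1 -> /dev/${ns_map[1]}"
        unset -n ns_map
    done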
00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
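What the trace above is doing, stripped of the xtrace noise: nvme/functions.sh@16-23 pipes `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2` through a read loop that splits each `field : value` line on the colon and evals the pair into a global associative array (here `nvme2`). A minimal sketch of that pattern, assuming nvme-cli's plain `field : value` output format; the helper name parse_id_ctrl is hypothetical, not part of functions.sh:

    parse_id_ctrl() {
        local dev=$1 reg val
        declare -gA idctrl=()            # global map, like nvme2=() in the trace
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}     # keys arrive padded, e.g. "vid       "
            val=${val# }                 # values arrive with one leading space
            [[ -n $reg && -n $val ]] || continue
            idctrl[$reg]=$val            # e.g. idctrl[vid]=0x1b36
        done < <(nvme id-ctrl "$dev")
    }

Run against the same QEMU device this would leave idctrl[sn]='12342 ', idctrl[mdts]=7 and so on, matching the assignments logged above; note that values containing further colons (subnqn, the ps0 power-state line) survive intact because read hands the whole remainder of the line to val.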
00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.738 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:23.739 18:11:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.739 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
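A quick aside on the two thresholds just captured: WCTEMP and CCTEMP are specified in kelvins in the NVMe base spec, so the 343/373 recorded for this QEMU controller convert to the familiar values (using the integer kelvin-to-Celsius conversion, ignoring the 0.15 fraction):

    echo $(( 343 - 273 ))   # wctemp -> 70 degC  (warning composite temperature)
    echo $(( 373 - 273 ))   # cctemp -> 100 degC (critical composite temperature)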
00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:24.006 18:11:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.006 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:24.007 
18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.007 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:24.008 
18:11:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
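Here the loop at functions.sh@54 switches from the controller to its namespaces: the extglob pattern @("ng${ctrl##*nvme}"|"${ctrl##*/}n")* matches both the generic char-dev nodes (ng2n1, ng2n2, ...) and the block-dev nodes (nvme2n1, ...) under the controller's sysfs directory, and each hit is fed back through the same nvme_get parser, this time with id-ns. A standalone sketch of that walk, assuming the same sysfs layout (nullglob is an addition here so a controller with no namespaces yields an empty loop):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    inst=${ctrl##*nvme}                      # -> "2"
    for ns in "$ctrl/"@("ng${inst}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}                     # -> ng2n1, ng2n2, ...
        nvme id-ns "/dev/$ns_dev"            # parsed field:value, as above
    done

The shopt must run before bash parses the pattern, which is why functions.sh can rely on it being set script-wide rather than inside the loop.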
00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:13:24.008 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:13:24.009 18:11:34 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:13:24.009 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.010 
18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # 
00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:24.010 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
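The wrapped trace above and below is bash xtrace from nvme/functions.sh: nvme_get runs nvme-cli against a namespace node, reads the human-readable 'field : value' report line by line with IFS=:, and evals each pair into a global associative array named after the device (here ng2n2). A minimal sketch of that pattern, assuming nvme-cli's id-ns text output; the name nvme_get_sketch and the trimming details are illustrative, not the SPDK helper verbatim:

    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # e.g. declare ng2n3=() globally, as @20 shows
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # 'lbaf  4 ' -> 'lbaf4', 'nsze   ' -> 'nsze'
            val=${val# }                    # drop the space right after the colon
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"      # e.g. ng2n3[nsze]=0x100000
        done < <("$@")                      # run the remaining args as the command
    }
    # usage mirroring the trace:
    # nvme_get_sketch ng2n3 /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3

The eval indirection is what makes each reg/val pair land in a caller-visible array keyed by register name, which is why the log shows one IFS=:/read/eval round per id-ns field.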
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@18 -- # shift
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:13:24.011 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
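For scale, the values captured for each of these namespaces decode to 4 GiB: flbas=0x4 points at lbaf4 ('ms:0 lbads:12 rp:0 (in use)'), lbads:12 means 2^12 = 4096-byte logical blocks, and nsze=0x100000 is 1,048,576 such blocks. A quick shell-arithmetic check (plain math, nothing here touches the device):

    nsze=$((0x100000))                    # 1048576 logical blocks
    lbads=12                              # from the in-use lbaf4 format
    bytes=$(( nsze * (1 << lbads) ))      # 4294967296
    echo "$bytes bytes = $(( bytes >> 30 )) GiB"   # 4 GiB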
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@18 -- # shift
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:13:24.012 18:11:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:13:24.013 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
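Two details of the surrounding loop are worth noting: the @54 glob @("ng${ctrl##*nvme}"|"${ctrl##*/}n")* matches both the generic character nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, nvme2n2, ...) under the controller, and @58 keys _ctrl_ns by the bare namespace index (${ns##*n}), so each nvme2nX registration from here on overwrites the ng2nX entry recorded above for the same index. A toy illustration of that indexing, with the device names hard-coded rather than globbed:

    for ns in ng2n1 ng2n2 ng2n3 nvme2n1 nvme2n2 nvme2n3; do
        # ${ns##*n} strips the longest prefix ending in 'n', leaving the index
        echo "_ctrl_ns[${ns##*n}] <- $ns"
    done
    # indexes 1..3 each appear twice; ng* sorts before nvme*, so the
    # block-device names are what _ctrl_ns ends up holding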
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@18 -- # shift
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:13:24.014 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:13:24.015 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:24.016 18:11:34 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.016 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:24.017 18:11:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:24.017 18:11:34 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:24.017 18:11:34 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:24.017 18:11:34 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:24.017 18:11:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:24.017 18:11:34 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:24.017 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:24.018 18:11:34 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:24.018 18:11:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 
18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.018 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:24.019 18:11:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 
18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:24.019 
18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.019 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.020 18:11:34 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:24.020 18:11:34 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:13:24.020 18:11:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs
00:13:24.020 18:11:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs
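The pages of trace above are nvme_get populating one Bash associative array per controller and namespace (nvme2n2, nvme2n3, nvme3, ...) out of nvme-cli's id-ns and id-ctrl output: each "register : value" line is split on the colon and stored via eval. A minimal sketch of that pattern, assuming nvme-cli's usual "field : value" layout (the real nvme/functions.sh additionally handles the shift/eval plumbing and per-namespace arrays):

  declare -A nvme1
  while IFS=: read -r reg val; do
      # skip headers and blank lines; strip the padding around the field name
      [[ -n $reg && -n $val ]] || continue
      nvme1[${reg//[[:space:]]/}]=${val## }
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1)
  echo "oncs=${nvme1[oncs]}"   # prints oncs=0x15d on these QEMU controllers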
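With those arrays in place, get_ctrls_with_feature walks every discovered controller and keeps the ones that pass ctrl_has_scc, the check traced just above and below for nvme1, nvme0, nvme3 and nvme2. ONCS is a bit field from Identify Controller; bit 8 advertises the Copy (Simple Copy) command, and 0x15d has that bit set. Reduced to a sketch:

  ctrl_has_scc() {
      local oncs=$1         # e.g. 0x15d, read back from the array above
      (( oncs & 1 << 8 ))   # exit status 0 only when the Copy bit is set
  }
  ctrl_has_scc 0x15d && echo "supports SCC"   # 0x15d = 0b101011101, bit 8 set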
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:13:24.021 18:11:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:13:24.279 18:11:34 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:13:24.279 18:11:34 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:13:24.279 18:11:34 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:13:24.279 18:11:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:13:24.279 18:11:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:13:24.279 18:11:34 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:13:24.846 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:13:25.779 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:13:25.779 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:13:25.779 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:13:25.779 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:13:25.779 18:11:36 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:13:25.779 18:11:36 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:13:25.779 18:11:36 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:25.779 18:11:36 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:13:25.779 ************************************
00:13:25.779 START TEST nvme_simple_copy ************************************
00:13:25.779 18:11:36 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:13:26.037 Initializing NVMe Controllers
00:13:26.037 Attaching to 0000:00:10.0
00:13:26.037 Controller supports SCC. Attached to 0000:00:10.0
00:13:26.037 Namespace ID: 1 size: 6GB
00:13:26.037 Initialization complete.
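The test whose results follow writes 64 LBAs of random data at the start of the namespace, issues a single Simple Copy with destination LBA 256, then reads both ranges back and counts matching blocks. Roughly the same exercise from the shell, as a sketch only: the nvme copy flag names here are assumptions that differ across nvme-cli versions (check nvme copy --help), and the device must be a scratch namespace because every step overwrites data:

  dev=/dev/nvme1n1   # hypothetical scratch namespace with 4096-byte blocks
  dd if=/dev/urandom of="$dev" bs=4096 count=64 oflag=direct   # fill LBAs 0..63
  nvme copy "$dev" --sdlba=256 --slbs=0 --blocks=63            # NLB is 0-based: 64 blocks
  dd if="$dev" of=/tmp/src bs=4096 skip=0   count=64 iflag=direct
  dd if="$dev" of=/tmp/dst bs=4096 skip=256 count=64 iflag=direct
  cmp /tmp/src /tmp/dst && echo "LBAs matching Written Data: 64"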
00:13:26.037 00:13:26.037 Controller QEMU NVMe Ctrl (12340 ) 00:13:26.037 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:13:26.037 Namespace Block Size:4096 00:13:26.037 Writing LBAs 0 to 63 with Random Data 00:13:26.037 Copied LBAs from 0 - 63 to the Destination LBA 256 00:13:26.037 LBAs matching Written Data: 64 00:13:26.037 00:13:26.037 real 0m0.314s 00:13:26.037 user 0m0.109s 00:13:26.037 sys 0m0.104s 00:13:26.037 18:11:36 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.037 ************************************ 00:13:26.037 END TEST nvme_simple_copy 00:13:26.037 ************************************ 00:13:26.037 18:11:36 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:13:26.295 00:13:26.295 real 0m9.094s 00:13:26.295 user 0m1.662s 00:13:26.295 sys 0m2.494s 00:13:26.295 18:11:36 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.295 18:11:36 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:26.295 ************************************ 00:13:26.295 END TEST nvme_scc 00:13:26.295 ************************************ 00:13:26.295 18:11:36 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:13:26.295 18:11:36 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:13:26.295 18:11:36 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:13:26.295 18:11:36 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:13:26.295 18:11:36 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:13:26.295 18:11:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:26.295 18:11:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.295 18:11:36 -- common/autotest_common.sh@10 -- # set +x 00:13:26.295 ************************************ 00:13:26.295 START TEST nvme_fdp 00:13:26.295 ************************************ 00:13:26.295 18:11:36 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:13:26.295 * Looking for test storage... 00:13:26.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:26.295 18:11:36 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:26.295 18:11:36 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:13:26.295 18:11:36 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:26.592 18:11:36 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:26.592 18:11:36 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:13:26.592 18:11:36 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:26.592 18:11:36 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:26.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.592 --rc genhtml_branch_coverage=1 00:13:26.592 --rc genhtml_function_coverage=1 00:13:26.592 --rc genhtml_legend=1 00:13:26.592 --rc geninfo_all_blocks=1 00:13:26.592 --rc geninfo_unexecuted_blocks=1 00:13:26.592 00:13:26.592 ' 00:13:26.592 18:11:36 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:26.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.592 --rc genhtml_branch_coverage=1 00:13:26.592 --rc genhtml_function_coverage=1 00:13:26.592 --rc genhtml_legend=1 00:13:26.592 --rc geninfo_all_blocks=1 00:13:26.592 --rc geninfo_unexecuted_blocks=1 00:13:26.592 00:13:26.592 ' 00:13:26.592 18:11:36 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:26.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.592 --rc genhtml_branch_coverage=1 00:13:26.592 --rc genhtml_function_coverage=1 00:13:26.592 --rc genhtml_legend=1 00:13:26.592 --rc geninfo_all_blocks=1 00:13:26.592 --rc geninfo_unexecuted_blocks=1 00:13:26.592 00:13:26.592 ' 00:13:26.593 18:11:36 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:26.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.593 --rc genhtml_branch_coverage=1 00:13:26.593 --rc genhtml_function_coverage=1 00:13:26.593 --rc genhtml_legend=1 00:13:26.593 --rc geninfo_all_blocks=1 00:13:26.593 --rc geninfo_unexecuted_blocks=1 00:13:26.593 00:13:26.593 ' 00:13:26.593 18:11:36 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:26.593 18:11:36 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:26.593 18:11:36 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:26.593 18:11:36 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:26.593 18:11:36 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:26.593 18:11:36 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:13:26.593 18:11:36 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.593 18:11:36 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.593 18:11:36 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.593 18:11:36 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.593 18:11:36 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.593 18:11:36 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.593 18:11:36 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:13:26.593 18:11:36 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.593 18:11:36 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:13:26.593 18:11:36 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:26.593 18:11:36 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:13:26.593 18:11:36 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:26.593 18:11:36 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:13:26.593 18:11:36 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:26.593 18:11:36 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:26.593 18:11:36 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:26.593 18:11:36 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:13:26.593 18:11:36 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:26.593 18:11:36 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:27.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:27.417 Waiting for block devices as requested 00:13:27.417 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:27.417 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:27.417 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:27.675 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:32.946 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:32.946 18:11:43 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:13:32.946 18:11:43 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:32.946 18:11:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:32.946 18:11:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:32.946 18:11:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:32.946 18:11:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.946 18:11:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.946 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:32.947 18:11:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:32.947 18:11:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.947 18:11:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.947 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:32.948 18:11:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 
18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:32.948 18:11:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:13:32.948 18:11:43 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:32.948 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:13:32.949 18:11:43 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
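(Worth pausing on the fields being read at this point in the dump: mssrl, plus mcl and msrc in the entries just below, are the namespace's Copy-command limits, i.e. the ceilings the simple-copy test earlier had to respect. A small sketch that pulls them directly, assuming nvme-cli's human-readable id-ns field names as used in this trace:)

```bash
# Pull the Copy limits straight from Identify Namespace (hedged sketch).
ns=/dev/ng0n1
eval "$(nvme id-ns "$ns" | awk -F: '/^(mssrl|mcl|msrc)/ {gsub(/ /, ""); print $1 "=" $2}')"

echo "blocks per source range (mssrl): $mssrl"
echo "blocks per Copy command (mcl)  : $mcl"
echo "source ranges per Copy         : $((msrc + 1))"   # msrc is 0-based
```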
00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:13:32.949 18:11:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:32.949 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
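The entries that follow (functions.sh@54-57) show the other half of the walk: for each controller the script extglob-matches both the generic character node (ng0n1) and the block node (nvme0n1) under /sys/class/nvme/<ctrl> and feeds each match back through nvme_get, which is why the ng0n1 table above is immediately followed by an identical id-ns pass for nvme0n1. A standalone sketch of that glob, assuming a /sys/class/nvme/nvme0 directory exists; the echo is a hypothetical stand-in for the nvme_get call, illustration only:

  #!/usr/bin/env bash
  # The @54 namespace glob in isolation: for ctrl=/sys/class/nvme/nvme0 the
  # pattern expands to @("ng0"|"nvme0n")* and so matches ng0n1 and nvme0n1.
  shopt -s extglob nullglob              # extglob for @(...), nullglob if no match
  ctrl=/sys/class/nvme/nvme0             # assumption: controller 0 is present
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      ns_dev=${ns##*/}                   # ng0n1, then nvme0n1
      echo "would parse: nvme id-ns /dev/$ns_dev"
  done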
00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:32.950 18:11:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:32.950 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:32.951 18:11:43 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:32.951 18:11:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:32.951 18:11:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:32.951 18:11:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:32.951 18:11:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:32.951 18:11:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.951 18:11:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:32.951 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:32.952 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:13:32.953 18:11:43 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.953 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
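[annotation] The repeating "IFS=:", "read -r reg val", "eval" triplets above are nvme/functions.sh's nvme_get helper caching every field that nvme-cli's id-ctrl/id-ns output prints into a per-device bash associative array (nvme1, ng1n1, ...). A minimal sketch of that pattern, reconstructed from the functions.sh@16-23 trace lines rather than copied verbatim from SPDK:

nvme_get_sketch() {                      # reconstruction of functions.sh@16-23
    local ref=$1 reg val; shift          # $1: array name, e.g. nvme1 or ng1n1
    local -gA "$ref=()"                  # global associative array, as traced at @20
    while IFS=: read -r reg val; do      # split "field : value" on the first colon
        reg=${reg//[[:space:]]/}         # strip padding around the key
        [[ -n $val ]] || continue        # skip lines with no value
        val=${val#"${val%%[![:space:]]*}"}    # left-trim the value
        eval "${ref}[$reg]=\"\$val\""    # e.g. nvme1[subnqn]=nqn.2019-08.org.qemu:12340
    done < <("$@")                       # e.g. nvme id-ctrl /dev/nvme1
}
# usage: nvme_get_sketch nvme1 nvme id-ctrl /dev/nvme1; echo "${nvme1[mdts]}"

Values that themselves contain colons (the ps0 power state, the lbaf descriptors) survive because read hands everything after the first separator to val, exactly as the trace shows.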
00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:13:32.954 18:11:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:32.954 18:11:43 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:32.954 18:11:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:32.954 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:32.955 18:11:43 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:32.955 18:11:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
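[annotation] The for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* loop traced above (functions.sh@54) is an extglob pattern that picks up both namespace node flavours under a controller: the generic character device (ng1n1) and the block device (nvme1n1), each re-scanned with id-ns and filed into the controller's namespace map through the _ctrl_ns nameref. A hedged sketch of that walk, assuming the trace's variable names:

shopt -s extglob nullglob                # the @(...) alternation needs extglob
ctrl=/sys/class/nvme/nvme1
declare -A nvme1_ns=()
declare -n _ctrl_ns=nvme1_ns             # nameref, as at functions.sh@53
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    # ${ctrl##*nvme} -> "1" and ${ctrl##*/} -> "nvme1", so the glob expands to
    # $ctrl/@(ng1|nvme1n)*, matching ng1n1 and nvme1n1
    [[ -e $ns ]] || continue             # functions.sh@55
    ns_dev=${ns##*/}                     # functions.sh@56
    nvme_get_sketch "$ns_dev" nvme id-ns "/dev/$ns_dev"
    _ctrl_ns[${ns##*n}]=$ns_dev          # index by namespace number, functions.sh@58
done

Note that ng1n1 and nvme1n1 share index 1, so the block device entry overwrites the generic one, which matches the two functions.sh@58 assignments in this trace.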
00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:32.955 18:11:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:32.956 18:11:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:32.956 18:11:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:32.956 18:11:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:32.956 18:11:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
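[annotation] The ver=0x10400 field cached just above is the controller's VS (Version) register value; in the NVMe spec layout the major, minor and tertiary numbers sit in bits 31:16, 15:8 and 7:0, so 0x10400 decodes to NVMe 1.4.0. A hypothetical one-liner (not part of functions.sh) to decode it:

decode_nvme_ver() {                      # hypothetical helper, not in SPDK
    local v=$(( $1 ))
    printf '%d.%d.%d\n' $(( v >> 16 )) $(( (v >> 8) & 0xff )) $(( v & 0xff ))
}
decode_nvme_ver 0x10400                  # prints 1.4.0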
00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:32.956 18:11:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.956 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:32.957 18:11:43 nvme_fdp -- 
00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2 id-ctrl parse continues, one read/eval pair per field (condensed):
00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2: mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:13:32.957 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2: subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0
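The functions.sh@16-@23 records above are bash xtrace from SPDK's nvme_get helper, which shells out to nvme-cli and folds each "field : value" line of its output into a global associative array (here nvme2). A minimal sketch reconstructed from the trace follows; the whitespace-trimming details are assumptions, and the real test/nvme/functions.sh may differ:

    # Sketch of nvme_get as seen in the trace (hypothetical reconstruction).
    nvme_get() {
        local ref=$1 reg val        # @17: target array name, e.g. nvme2
        shift                       # @18: rest is the nvme-cli invocation
        local -gA "$ref=()"         # @20: declare the global assoc array

        while IFS=: read -r reg val; do              # @21: split "reg : val"
            reg=${reg//[[:space:]]/} val=${val# }    # assumed whitespace trim
            [[ -n $val ]] || continue                # @22: skip blank values
            eval "${ref}[${reg}]=\"${val}\""         # @23: e.g. nvme2[sqes]=0x66
        done < <("$@")              # @16: e.g. nvme id-ctrl /dev/nvme2
    }

The eval-through-a-name pattern is what produces the paired records in the log: one eval 'nvme2[reg]="val"' line followed by the resulting nvme2[reg]=val assignment echo.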
00:13:32.958 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2: ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:13:32.958 18:11:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:13:32.958 18:11:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:32.958 18:11:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:13:32.958 18:11:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:13:32.958 18:11:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:13:32.958 18:11:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:13:32.958 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1: nsze=0x100000 ncap=0x100000
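Before the remaining ng2n1 fields scroll past: the @53-@58 records show how functions.sh enumerates every namespace node under the controller's sysfs directory, both the character-device form (ng2n1) and the block-device form (nvme2n1). The loop below mirrors those trace lines as a sketch; the shopt/declare scaffolding is an assumption (in the trace this runs inside a function, where @53 uses local -n), and ctrl is inferred from the [[ -e ... ]] checks:

    # Namespace scan mirrored from the @53-@58 trace records (sketch).
    shopt -s extglob                              # the @(...) glob at @54 needs extglob
    ctrl=/sys/class/nvme/nvme2                    # assumed from the sysfs paths above
    declare -n _ctrl_ns=${ctrl##*/}_ns            # @53: nameref to nvme2_ns
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # ng2* or nvme2n*
        [[ -e $ns ]] || continue                  # @55: the glob may match nothing
        ns_dev=${ns##*/}                          # @56: ng2n1, ng2n2, ...
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: parse id-ns into $ns_dev[]
        _ctrl_ns[${ns##*n}]=$ns_dev               # @58: keyed by the trailing NSID
    done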
00:13:32.958 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1: nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:13:32.958 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:13:32.959 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:32.959 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:13:32.959 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:32.959 18:11:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
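Worth decoding once: each lbafN entry is an LBA format descriptor, where ms is the metadata size in bytes, lbads is the log2 of the data block size, and rp is the relative performance hint. flbas=0x4 selects lbaf4, i.e. 4096-byte blocks with no metadata, which matches its "(in use)" marker. A quick check against the parsed array (hypothetical helper, not part of functions.sh):

    # Derive the active block size from the parsed ng2n1 array (assumed helper).
    fmt=$(( ${ng2n1[flbas]} & 0xf ))                   # low nibble of FLBAS picks the format
    lbads=$(grep -o 'lbads:[0-9]*' <<< "${ng2n1[lbaf$fmt]}" | cut -d: -f2)
    echo "ng2n1: lbaf$fmt, $((1 << lbads))-byte LBAs"  # -> lbaf4, 4096-byte LBAs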
00:13:32.959 18:11:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:13:32.959 18:11:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:13:32.959 18:11:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:13:32.959 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2: identical to ng2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0-lbaf7 as above (lbaf4 in use)
00:13:32.960 18:11:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
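The @58 record is what ties the scan together: _ctrl_ns is a nameref to nvme2_ns, and ${ns##*n} strips the longest prefix through the last 'n' in the sysfs path, leaving just the namespace ID as the array key. An illustration of that expansion:

    # How the @58 subscript falls out of parameter expansion (illustration).
    ns=/sys/class/nvme/nvme2/ng2n2
    echo "${ns##*n}"    # -> 2: everything through the last 'n' is removed
    # so ng2n1/ng2n2/ng2n3 land at NSIDs 1, 2 and 3 of nvme2_ns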
00:13:32.960 18:11:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:13:32.960 18:11:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:13:32.960 18:11:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3: all fields identical to ng2n1/ng2n2 above, lbaf0-lbaf7 as above (lbaf4 in use)
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:32.961 18:11:43 nvme_fdp -- 
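For readers following the xtrace: each block above is one nvme_get call, which pipes nvme-cli's "key : value" output through a read loop and evals every pair into a global associative array. A minimal sketch of that pattern, assuming nvme-cli output of that shape (the helper and array names here are illustrative, not the exact upstream code):

  nvme_get_sketch() {
      local ref=$1 source=$2 reg val
      declare -gA "$ref=()"                   # e.g. nvme2n1=()
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue           # skip headers and blank lines
          reg=${reg//[[:space:]]/}            # "lbaf  4 " -> "lbaf4"
          val=${val#"${val%%[![:space:]]*}"}  # trim leading whitespace
          eval "${ref}[\$reg]=\"\$val\""      # nvme2n1[nsze]="0x100000"
      done < <(nvme id-ns "$source")
  }

The [[ -n $val ]] guard is why the trace shows a skipped "[[ -n '' ]]" line right after each id-ns invocation: the report's banner line carries no value after its colon.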
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:13:32.961 18:11:43 nvme_fdp -- nvme/functions.sh [trace condensed] -- nvme2n1 id-ns capture: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:13:33.350 18:11:43 nvme_fdp -- nvme/functions.sh [trace condensed] -- atomic-write and preferred-granularity fields all zero: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:13:33.350 18:11:43 nvme_fdp -- nvme/functions.sh [trace condensed] -- copy limits and identifiers: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:33.351 18:11:43 nvme_fdp -- nvme/functions.sh [trace condensed] -- LBA formats identical to ng2n3: lbaf0-lbaf3 are lbads:9 with ms:0/8/16/64, lbaf4-lbaf7 are lbads:12 with ms:0/8/16/64; lbaf4='ms:0 lbads:12 rp:0 (in use)' is active
00:13:33.351 18:11:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:13:33.351 18:11:43 nvme_fdp -- nvme/functions.sh@54-57 -- # /sys/class/nvme/nvme2/nvme2n2 exists; ns_dev=nvme2n2; nvme_get nvme2n2 id-ns /dev/nvme2n2 begins: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7
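Worth decoding here: flbas=0x4 selects lbaf4, and lbads is log2 of the data block size, so every namespace in this run is formatted with 4096-byte blocks and no per-block metadata (ms:0). A quick check of that arithmetic against the captured array (array name as in the trace; the bit masking follows the NVMe spec's FLBAS layout):

  active=$(( ${nvme2n1[flbas]} & 0xf ))   # bits 0-3 index the LBA format -> 4
  echo "${nvme2n1[lbaf4]}"                # -> ms:0 lbads:12 rp:0 (in use)
  echo $(( 1 << 12 ))                     # lbads:12 -> 4096-byte logical blocks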
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:13:33.351 18:11:43 nvme_fdp -- nvme/functions.sh [trace condensed] -- nvme2n2 id-ns capture matches nvme2n1: mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:13:33.352 18:11:43 nvme_fdp -- nvme/functions.sh [trace condensed] -- mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000, plus the same eight LBA formats with lbaf4='ms:0 lbads:12 rp:0 (in use)' active
00:13:33.352 18:11:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:13:33.352 18:11:43 nvme_fdp -- nvme/functions.sh@54-57 -- # /sys/class/nvme/nvme2/nvme2n3 exists; ns_dev=nvme2n3; nvme_get nvme2n3 id-ns /dev/nvme2n3 begins: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14
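The slot written by _ctrl_ns[${ns##*n}] is simply the namespace number peeled off the sysfs name with parameter expansion. Note that the character-device node ng2n3 and the block node nvme2n3 both expand to slot 3, so whichever node the glob visits last is what the slot ends up holding:

  ns=/sys/class/nvme/nvme2/nvme2n3
  echo "${ns##*n}"   # strips through the last 'n' -> 3 (same slot ng2n3 wrote earlier)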
00:13:33.353 18:11:43 nvme_fdp -- nvme/functions.sh [trace condensed] -- nvme2n3 id-ns capture: nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:13:33.353 18:11:43 nvme_fdp -- nvme/functions.sh [trace condensed] -- mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
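All-zero nguid/eui64 values, as captured for every namespace above, mean this QEMU controller assigns no persistent unique identifier, so anything wanting a stable handle must fall back to the controller serial plus NSID. A small guard for that case (array name as captured above; the fallback message is illustrative):

  [[ ${nvme2n3[nguid]//0/} ]] || echo "nvme2n3: no NGUID assigned, key on sn+nsid instead"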
00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh [trace condensed] -- nvme2n3 lbaf4-lbaf7 match the other namespaces; lbaf4='ms:0 lbads:12 rp:0 (in use)' is active
00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@60-63 -- # controller registered: ctrls[nvme2]=nvme2 nvmes[nvme2]=nvme2_ns bdfs[nvme2]=0000:00:12.0 ordered_ctrls[2]=nvme2
00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@47-51 -- # /sys/class/nvme/nvme3 exists; pci=0000:00:13.0; pci_can_use 0000:00:13.0 (scripts/common.sh@18-27) returns 0; ctrl_dev=nvme3
00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
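The registration lines above are the tail of one iteration of the enumeration pass: every controller directory under /sys/class/nvme yields a PCI check, an id-ctrl capture, and one id-ns capture per namespace node. Its overall shape, condensed from the trace (extglob pattern as logged; pci_can_use and nvme_get bodies elided, and reading the BDF from $ctrl/address is an assumption about this sysfs layout):

  shopt -s extglob nullglob
  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      pci=$(< "$ctrl/address")             # assumed source of the 0000:00:XX.0 BDF
      pci_can_use "$pci" || continue       # honors PCI allow/block lists
      ctrl_dev=${ctrl##*/}                 # e.g. nvme3
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
      declare -A _ctrl_ns=()
      for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
          [[ -e $ns ]] || continue
          nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"
          _ctrl_ns[${ns##*n}]=${ns##*/}    # ng2n3 then nvme2n3 both land in slot 3
      done
      ctrls["$ctrl_dev"]=$ctrl_dev
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns    # name of the per-controller namespace map
      bdfs["$ctrl_dev"]=$pci
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
  done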
nvme/functions.sh@21 -- # read -r reg val 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
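The xtrace above is nvme/functions.sh walking the output of nvme id-ctrl /dev/nvme3 line by line: each register name/value pair is split on the colon and stored into the nvme3 associative array via eval. A minimal standalone sketch of that pattern, assuming nvme-cli is installed and the device node exists (variable names here are illustrative, not the suite's exact code):

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}          # id-ctrl keys arrive padded, e.g. 'mdts    '
        [[ -n $reg && -n $val ]] || continue
        ctrl[$reg]=${val# }               # keep the raw value, e.g. ctrl[mdts]=7
    done < <(nvme id-ctrl /dev/nvme3)
    echo "mdts=${ctrl[mdts]}"

The real helper additionally evals into a caller-named array and tolerates multi-word values such as the power-state descriptors seen further down.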
00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.354 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.355 18:11:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 
18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:33.355 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
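Two of the registers just captured are packed bitfields rather than plain counts: sqes=0x66 and cqes=0x44 encode the minimum (low nibble) and maximum (high nibble) queue entry sizes as powers of two, so this controller uses the standard 64-byte submission and 16-byte completion queue entries. A quick hedged decode (field layout per the NVMe base specification; the helper name is made up):

    decode_qes() {                        # $1 = SQES/CQES byte, e.g. 0x66
        local v=$(( $1 ))
        echo "min=$(( 1 << (v & 0xf) ))B max=$(( 1 << ((v >> 4) & 0xf) ))B"
    }
    decode_qes 0x66                       # min=64B max=64B  (SQ entries)
    decode_qes 0x44                       # min=16B max=16B  (CQ entries)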
00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:33.356 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
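Once a controller's array is populated, functions.sh files it into a set of global registries, the pattern already visible for nvme2 above and about to repeat for nvme3 below: ctrls maps the device name to its id-ctrl array, nvmes to its namespace map, bdfs to its PCI address, and ordered_ctrls keeps the numeric ordering. A trimmed sketch of that bookkeeping (register_ctrl is a hypothetical wrapper; the suite does this inline):

    declare -gA ctrls nvmes bdfs
    declare -ga ordered_ctrls
    register_ctrl() {                     # $1 = ctrl dev (nvme3), $2 = PCI BDF
        ctrls[$1]=$1                      # id-ctrl data lives in the array named after $1
        nvmes[$1]=${1}_ns                 # namespace arrays, e.g. nvme3_ns
        bdfs[$1]=$2
        ordered_ctrls[${1/nvme/}]=$1      # index 3 -> nvme3
    }
    register_ctrl nvme3 0000:00:13.0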
00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:33.357 18:11:43 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:13:33.357 18:11:43 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:13:33.357 18:11:43 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:13:33.357 18:11:43 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:13:33.357 18:11:43 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:33.953 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:34.521 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:34.521 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:34.521 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:34.521 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:34.779 18:11:45 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:34.779 18:11:45 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:34.779 18:11:45 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.779 18:11:45 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:34.779 ************************************ 00:13:34.779 START TEST nvme_flexible_data_placement 00:13:34.779 ************************************ 00:13:34.779 18:11:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:35.038 Initializing NVMe Controllers 00:13:35.038 Attaching to 0000:00:13.0 00:13:35.038 Controller supports FDP Attached to 0000:00:13.0 00:13:35.038 Namespace ID: 1 Endurance Group ID: 1 00:13:35.038 Initialization complete. 
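The controller selection above is the crux of this test: ctrl_has_fdp fetches each controller's ctratt and tests bit 19 (Flexible Data Placement support), which is why nvme0, nvme1, and nvme2 with ctratt=0x8000 are passed over while nvme3's 0x88010 matches (0x88010 & 0x80000 != 0). A condensed sketch of the check, assuming the per-controller arrays built earlier:

    ctrl_has_fdp() {                      # true when CTRATT bit 19 (FDP) is set
        local -n _c=$1                    # nameref to e.g. the nvme3 array
        (( ${_c[ctratt]:-0} & 1 << 19 ))
    }
    ctrl_has_fdp nvme3 && echo nvme3      # selected controller for the FDP run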
00:13:35.038 00:13:35.038 ================================== 00:13:35.038 == FDP tests for Namespace: #01 == 00:13:35.038 ================================== 00:13:35.038 00:13:35.038 Get Feature: FDP: 00:13:35.038 ================= 00:13:35.038 Enabled: Yes 00:13:35.038 FDP configuration Index: 0 00:13:35.038 00:13:35.038 FDP configurations log page 00:13:35.038 =========================== 00:13:35.038 Number of FDP configurations: 1 00:13:35.038 Version: 0 00:13:35.038 Size: 112 00:13:35.038 FDP Configuration Descriptor: 0 00:13:35.038 Descriptor Size: 96 00:13:35.038 Reclaim Group Identifier format: 2 00:13:35.038 FDP Volatile Write Cache: Not Present 00:13:35.038 FDP Configuration: Valid 00:13:35.038 Vendor Specific Size: 0 00:13:35.038 Number of Reclaim Groups: 2 00:13:35.038 Number of Reclaim Unit Handles: 8 00:13:35.038 Max Placement Identifiers: 128 00:13:35.038 Number of Namespaces Supported: 256 00:13:35.038 Reclaim Unit Nominal Size: 6000000 bytes 00:13:35.038 Estimated Reclaim Unit Time Limit: Not Reported 00:13:35.038 RUH Desc #000: RUH Type: Initially Isolated 00:13:35.038 RUH Desc #001: RUH Type: Initially Isolated 00:13:35.038 RUH Desc #002: RUH Type: Initially Isolated 00:13:35.038 RUH Desc #003: RUH Type: Initially Isolated 00:13:35.038 RUH Desc #004: RUH Type: Initially Isolated 00:13:35.038 RUH Desc #005: RUH Type: Initially Isolated 00:13:35.038 RUH Desc #006: RUH Type: Initially Isolated 00:13:35.038 RUH Desc #007: RUH Type: Initially Isolated 00:13:35.038 00:13:35.038 FDP reclaim unit handle usage log page 00:13:35.038 ====================================== 00:13:35.038 Number of Reclaim Unit Handles: 8 00:13:35.038 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:35.038 RUH Usage Desc #001: RUH Attributes: Unused 00:13:35.038 RUH Usage Desc #002: RUH Attributes: Unused 00:13:35.038 RUH Usage Desc #003: RUH Attributes: Unused 00:13:35.038 RUH Usage Desc #004: RUH Attributes: Unused 00:13:35.038 RUH Usage Desc #005: RUH Attributes: Unused 00:13:35.038 RUH Usage Desc #006: RUH Attributes: Unused 00:13:35.038 RUH Usage Desc #007: RUH Attributes: Unused 00:13:35.038 00:13:35.038 FDP statistics log page 00:13:35.038 ======================= 00:13:35.038 Host bytes with metadata written: 996753408 00:13:35.038 Media bytes with metadata written: 999108608 00:13:35.038 Media bytes erased: 0 00:13:35.038 00:13:35.038 FDP Reclaim unit handle status 00:13:35.038 ============================== 00:13:35.038 Number of RUHS descriptors: 2 00:13:35.038 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000096c 00:13:35.038 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:13:35.038 00:13:35.038 FDP write on placement id: 0 success 00:13:35.038 00:13:35.038 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:13:35.038 00:13:35.038 IO mgmt send: RUH update for Placement ID: #0 Success 00:13:35.038 00:13:35.038 Get Feature: FDP Events for Placement handle: #0 00:13:35.038 ======================== 00:13:35.038 Number of FDP Events: 6 00:13:35.038 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:13:35.038 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:13:35.038 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:13:35.038 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:13:35.038 FDP Event: #4 Type: Media Reallocated Enabled: No 00:13:35.038 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:13:35.038 00:13:35.038 FDP events log page 
00:13:35.038 =================== 00:13:35.038 Number of FDP events: 1 00:13:35.038 FDP Event #0: 00:13:35.038 Event Type: RU Not Written to Capacity 00:13:35.038 Placement Identifier: Valid 00:13:35.038 NSID: Valid 00:13:35.038 Location: Valid 00:13:35.038 Placement Identifier: 0 00:13:35.038 Event Timestamp: 7 00:13:35.038 Namespace Identifier: 1 00:13:35.038 Reclaim Group Identifier: 0 00:13:35.038 Reclaim Unit Handle Identifier: 0 00:13:35.038 00:13:35.038 FDP test passed 00:13:35.038 00:13:35.038 real 0m0.292s 00:13:35.038 user 0m0.102s 00:13:35.038 sys 0m0.088s 00:13:35.038 18:11:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.038 18:11:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:13:35.038 ************************************ 00:13:35.038 END TEST nvme_flexible_data_placement 00:13:35.038 ************************************ 00:13:35.038 00:13:35.038 real 0m8.885s 00:13:35.038 user 0m1.591s 00:13:35.038 sys 0m2.385s 00:13:35.038 18:11:45 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.038 ************************************ 00:13:35.038 18:11:45 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:35.038 END TEST nvme_fdp 00:13:35.038 ************************************ 00:13:35.296 18:11:45 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:13:35.296 18:11:45 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:35.296 18:11:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:35.296 18:11:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.296 18:11:45 -- common/autotest_common.sh@10 -- # set +x 00:13:35.296 ************************************ 00:13:35.296 START TEST nvme_rpc 00:13:35.296 ************************************ 00:13:35.296 18:11:45 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:35.296 * Looking for test storage... 
00:13:35.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:35.296 18:11:45 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:35.296 18:11:45 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:13:35.296 18:11:45 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:35.296 18:11:45 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:35.296 18:11:45 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.296 18:11:45 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.296 18:11:45 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.296 18:11:45 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.296 18:11:45 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.296 18:11:45 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.296 18:11:45 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.296 18:11:45 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:35.296 18:11:45 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.296 18:11:45 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.296 18:11:45 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.296 18:11:45 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:35.296 18:11:45 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:13:35.296 18:11:45 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.296 18:11:45 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:35.296 18:11:45 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:35.555 18:11:45 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:13:35.555 18:11:45 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.555 18:11:45 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:13:35.555 18:11:45 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.555 18:11:45 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:35.555 18:11:45 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:13:35.555 18:11:45 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.555 18:11:45 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:13:35.555 18:11:45 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.555 18:11:45 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.555 18:11:45 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.555 18:11:45 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:13:35.555 18:11:45 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.555 18:11:45 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:35.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.555 --rc genhtml_branch_coverage=1 00:13:35.555 --rc genhtml_function_coverage=1 00:13:35.555 --rc genhtml_legend=1 00:13:35.555 --rc geninfo_all_blocks=1 00:13:35.555 --rc geninfo_unexecuted_blocks=1 00:13:35.555 00:13:35.555 ' 00:13:35.555 18:11:45 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:35.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.555 --rc genhtml_branch_coverage=1 00:13:35.555 --rc genhtml_function_coverage=1 00:13:35.555 --rc genhtml_legend=1 00:13:35.555 --rc geninfo_all_blocks=1 00:13:35.555 --rc geninfo_unexecuted_blocks=1 00:13:35.555 00:13:35.555 ' 00:13:35.555 18:11:45 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:35.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.555 --rc genhtml_branch_coverage=1 00:13:35.555 --rc genhtml_function_coverage=1 00:13:35.555 --rc genhtml_legend=1 00:13:35.555 --rc geninfo_all_blocks=1 00:13:35.555 --rc geninfo_unexecuted_blocks=1 00:13:35.555 00:13:35.555 ' 00:13:35.555 18:11:45 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:35.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.555 --rc genhtml_branch_coverage=1 00:13:35.555 --rc genhtml_function_coverage=1 00:13:35.555 --rc genhtml_legend=1 00:13:35.555 --rc geninfo_all_blocks=1 00:13:35.555 --rc geninfo_unexecuted_blocks=1 00:13:35.555 00:13:35.555 ' 00:13:35.555 18:11:45 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:35.555 18:11:45 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:35.555 18:11:45 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:35.555 18:11:45 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:13:35.555 18:11:45 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:35.555 18:11:45 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:35.555 18:11:45 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:35.555 18:11:45 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:13:35.555 18:11:45 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:35.555 18:11:45 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:35.555 18:11:45 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:35.555 18:11:46 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:35.556 18:11:46 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:35.556 18:11:46 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:13:35.556 18:11:46 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:13:35.556 18:11:46 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67111 00:13:35.556 18:11:46 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:35.556 18:11:46 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:35.556 18:11:46 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67111 00:13:35.556 18:11:46 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67111 ']' 00:13:35.556 18:11:46 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.556 18:11:46 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:35.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.556 18:11:46 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.556 18:11:46 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:35.556 18:11:46 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.556 [2024-12-06 18:11:46.125795] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
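For context on the address nvme_rpc just settled on: get_first_nvme_bdf asks scripts/gen_nvme.sh for a JSON config describing every local controller, pulls each traddr out with jq, and takes the first entry, 0000:00:10.0 on this VM. Roughly (paths per this repo layout, error handling trimmed):

    get_first_nvme_bdf() {
        local bdfs
        # gen_nvme.sh emits bdev_nvme_attach_controller params for each local NVMe device
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1
        echo "${bdfs[0]}"                 # -> 0000:00:10.0 here
    }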
00:13:35.556 [2024-12-06 18:11:46.126001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67111 ] 00:13:35.814 [2024-12-06 18:11:46.300217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:36.072 [2024-12-06 18:11:46.422028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.072 [2024-12-06 18:11:46.422068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.009 18:11:47 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:37.009 18:11:47 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:37.009 18:11:47 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:13:37.267 Nvme0n1 00:13:37.267 18:11:47 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:37.267 18:11:47 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:37.267 request: 00:13:37.267 { 00:13:37.267 "bdev_name": "Nvme0n1", 00:13:37.267 "filename": "non_existing_file", 00:13:37.267 "method": "bdev_nvme_apply_firmware", 00:13:37.267 "req_id": 1 00:13:37.267 } 00:13:37.267 Got JSON-RPC error response 00:13:37.267 response: 00:13:37.267 { 00:13:37.267 "code": -32603, 00:13:37.267 "message": "open file failed." 00:13:37.267 } 00:13:37.267 18:11:47 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:37.267 18:11:47 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:37.267 18:11:47 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:37.524 18:11:48 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:37.524 18:11:48 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67111 00:13:37.524 18:11:48 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67111 ']' 00:13:37.524 18:11:48 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67111 00:13:37.524 18:11:48 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:37.524 18:11:48 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:37.524 18:11:48 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67111 00:13:37.782 18:11:48 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:37.782 18:11:48 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:37.782 18:11:48 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67111' 00:13:37.782 killing process with pid 67111 00:13:37.782 18:11:48 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67111 00:13:37.782 18:11:48 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67111 00:13:40.312 00:13:40.312 real 0m4.753s 00:13:40.312 user 0m8.758s 00:13:40.312 sys 0m0.768s 00:13:40.312 18:11:50 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.312 18:11:50 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.312 ************************************ 00:13:40.312 END TEST nvme_rpc 00:13:40.312 ************************************ 00:13:40.312 18:11:50 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:40.312 18:11:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:13:40.312 18:11:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.312 18:11:50 -- common/autotest_common.sh@10 -- # set +x 00:13:40.312 ************************************ 00:13:40.312 START TEST nvme_rpc_timeouts 00:13:40.312 ************************************ 00:13:40.312 18:11:50 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:40.312 * Looking for test storage... 00:13:40.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:40.312 18:11:50 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:40.312 18:11:50 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:13:40.312 18:11:50 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:40.312 18:11:50 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:40.313 18:11:50 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:13:40.313 18:11:50 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:40.313 18:11:50 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:40.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.313 --rc genhtml_branch_coverage=1 00:13:40.313 --rc genhtml_function_coverage=1 00:13:40.313 --rc genhtml_legend=1 00:13:40.313 --rc geninfo_all_blocks=1 00:13:40.313 --rc geninfo_unexecuted_blocks=1 00:13:40.313 00:13:40.313 ' 00:13:40.313 18:11:50 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:40.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.313 --rc genhtml_branch_coverage=1 00:13:40.313 --rc genhtml_function_coverage=1 00:13:40.313 --rc genhtml_legend=1 00:13:40.313 --rc geninfo_all_blocks=1 00:13:40.313 --rc geninfo_unexecuted_blocks=1 00:13:40.313 00:13:40.313 ' 00:13:40.313 18:11:50 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:40.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.313 --rc genhtml_branch_coverage=1 00:13:40.313 --rc genhtml_function_coverage=1 00:13:40.313 --rc genhtml_legend=1 00:13:40.313 --rc geninfo_all_blocks=1 00:13:40.313 --rc geninfo_unexecuted_blocks=1 00:13:40.313 00:13:40.313 ' 00:13:40.313 18:11:50 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:40.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.313 --rc genhtml_branch_coverage=1 00:13:40.313 --rc genhtml_function_coverage=1 00:13:40.313 --rc genhtml_legend=1 00:13:40.313 --rc geninfo_all_blocks=1 00:13:40.313 --rc geninfo_unexecuted_blocks=1 00:13:40.313 00:13:40.313 ' 00:13:40.313 18:11:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:40.313 18:11:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67187 00:13:40.313 18:11:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67187 00:13:40.313 18:11:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67219 00:13:40.313 18:11:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:40.313 18:11:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:13:40.313 18:11:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67219 00:13:40.313 18:11:50 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67219 ']' 00:13:40.313 18:11:50 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.313 18:11:50 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.313 18:11:50 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.313 18:11:50 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.313 18:11:50 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:40.313 [2024-12-06 18:11:50.845599] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:13:40.313 [2024-12-06 18:11:50.845944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67219 ] 00:13:40.573 [2024-12-06 18:11:51.054522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:40.832 [2024-12-06 18:11:51.164764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.832 [2024-12-06 18:11:51.164795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.402 18:11:51 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.402 18:11:51 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:13:41.402 Checking default timeout settings: 00:13:41.402 18:11:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:41.402 18:11:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:42.001 Making settings changes with rpc: 00:13:42.001 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:42.001 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:42.001 Check default vs. modified settings: 00:13:42.001 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:13:42.001 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67187 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67187 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:42.569 Setting action_on_timeout is changed as expected. 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67187 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67187 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:42.569 Setting timeout_us is changed as expected. 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67187 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67187 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:42.569 Setting timeout_admin_us is changed as expected. 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67187 /tmp/settings_modified_67187 00:13:42.569 18:11:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67219 00:13:42.569 18:11:52 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67219 ']' 00:13:42.569 18:11:52 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67219 00:13:42.569 18:11:52 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:13:42.569 18:11:52 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.569 18:11:52 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67219 00:13:42.569 killing process with pid 67219 00:13:42.569 18:11:52 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:42.569 18:11:52 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:42.569 18:11:52 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67219' 00:13:42.569 18:11:52 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67219 00:13:42.569 18:11:52 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67219 00:13:45.105 RPC TIMEOUT SETTING TEST PASSED. 00:13:45.105 18:11:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
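[annotation] The three checks just completed all follow one extract-and-compare pattern: the test dumps the live configuration with `rpc.py save_config` once before and once after `bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort`, pulls each setting's value out of both dumps, and requires that the value actually changed. A minimal standalone sketch of that pattern (the paths, field extraction, and success message are copied from the trace above; the function wrapper itself is illustrative, not part of nvme_rpc_timeouts.sh):

    # Compare one nvme option between the default and the modified config dump.
    # Both dumps come from `rpc.py save_config`, as seen earlier in the trace.
    check_setting() {
        local setting=$1
        local before after
        before=$(grep "$setting" /tmp/settings_default_67187 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_67187 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [[ "$before" == "$after" ]]; then
            echo "Setting $setting was not changed" >&2
            return 1
        fi
        echo "Setting $setting is changed as expected."
    }

    check_setting action_on_timeout   # none -> abort
    check_setting timeout_us          # 0    -> 12000000
    check_setting timeout_admin_us    # 0    -> 24000000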
00:13:45.105 ************************************ 00:13:45.105 END TEST nvme_rpc_timeouts 00:13:45.105 ************************************ 00:13:45.105 00:13:45.105 real 0m4.943s 00:13:45.105 user 0m9.306s 00:13:45.105 sys 0m0.839s 00:13:45.105 18:11:55 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.105 18:11:55 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:45.105 18:11:55 -- spdk/autotest.sh@239 -- # uname -s 00:13:45.105 18:11:55 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:13:45.105 18:11:55 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:45.105 18:11:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:45.105 18:11:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.105 18:11:55 -- common/autotest_common.sh@10 -- # set +x 00:13:45.105 ************************************ 00:13:45.105 START TEST sw_hotplug 00:13:45.105 ************************************ 00:13:45.105 18:11:55 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:45.105 * Looking for test storage... 00:13:45.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:45.105 18:11:55 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:45.105 18:11:55 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:13:45.105 18:11:55 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:45.365 18:11:55 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:45.365 18:11:55 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:13:45.365 18:11:55 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.365 18:11:55 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:45.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.365 --rc genhtml_branch_coverage=1 00:13:45.365 --rc genhtml_function_coverage=1 00:13:45.365 --rc genhtml_legend=1 00:13:45.365 --rc geninfo_all_blocks=1 00:13:45.365 --rc geninfo_unexecuted_blocks=1 00:13:45.365 00:13:45.365 ' 00:13:45.365 18:11:55 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:45.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.365 --rc genhtml_branch_coverage=1 00:13:45.365 --rc genhtml_function_coverage=1 00:13:45.365 --rc genhtml_legend=1 00:13:45.365 --rc geninfo_all_blocks=1 00:13:45.365 --rc geninfo_unexecuted_blocks=1 00:13:45.365 00:13:45.365 ' 00:13:45.365 18:11:55 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:45.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.365 --rc genhtml_branch_coverage=1 00:13:45.365 --rc genhtml_function_coverage=1 00:13:45.365 --rc genhtml_legend=1 00:13:45.365 --rc geninfo_all_blocks=1 00:13:45.365 --rc geninfo_unexecuted_blocks=1 00:13:45.365 00:13:45.365 ' 00:13:45.365 18:11:55 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:45.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.365 --rc genhtml_branch_coverage=1 00:13:45.365 --rc genhtml_function_coverage=1 00:13:45.365 --rc genhtml_legend=1 00:13:45.365 --rc geninfo_all_blocks=1 00:13:45.365 --rc geninfo_unexecuted_blocks=1 00:13:45.365 00:13:45.365 ' 00:13:45.365 18:11:55 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:45.935 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:45.935 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:45.935 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:45.935 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:45.935 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:46.195 18:11:56 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:13:46.195 18:11:56 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:13:46.195 18:11:56 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
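[annotation] The xtrace that follows expands the `nvme_in_userspace` helper step by step; stripped of the plumbing, the enumeration reduces to a single lspci pipeline selecting PCI functions whose class/subclass is 01/08 (mass storage / NVM) with programming interface 02 (NVM Express) — the same constants the trace computes with printf %02x. A sketch of just that pipeline (every flag is the one visible in the trace below):

    # Enumerate NVMe controllers by PCI class code 0108 with progif 02.
    # -mm: machine-readable, -n: numeric IDs, -D: full domain:bus:dev.func BDFs.
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'

Each BDF it prints is then filtered through pci_can_use (the PCI_ALLOWED/PCI_BLOCKED checks) and the /sys/bus/pci/drivers/nvme existence check before landing in the nvmes array, which the test truncates to nvme_count=2.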
00:13:46.195 18:11:56 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@233 -- # local class 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:46.195 18:11:56 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:46.195 18:11:56 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:46.196 18:11:56 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:46.196 18:11:56 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:13:46.196 18:11:56 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:46.196 18:11:56 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:46.196 18:11:56 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:46.196 18:11:56 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:46.196 18:11:56 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:13:46.196 18:11:56 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:46.196 18:11:56 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:46.196 18:11:56 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:46.196 18:11:56 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:13:46.196 18:11:56 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:46.196 18:11:56 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:13:46.196 18:11:56 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:13:46.196 18:11:56 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:46.765 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:47.025 Waiting for block devices as requested 00:13:47.025 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:47.329 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:47.329 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:47.329 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:52.601 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:52.602 18:12:02 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:13:52.602 18:12:02 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:53.168 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:13:53.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:53.168 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:13:53.738 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:13:54.015 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:54.015 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:54.015 18:12:04 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:13:54.015 18:12:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:54.274 18:12:04 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:13:54.274 18:12:04 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:13:54.274 18:12:04 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68113 00:13:54.274 18:12:04 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:13:54.274 18:12:04 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:13:54.274 18:12:04 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:54.274 18:12:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:13:54.274 18:12:04 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:54.274 18:12:04 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:54.274 18:12:04 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:54.274 18:12:04 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:54.274 18:12:04 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:13:54.274 18:12:04 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:54.274 18:12:04 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:54.274 18:12:04 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:13:54.274 18:12:04 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:54.274 18:12:04 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:54.533 Initializing NVMe Controllers 00:13:54.533 Attaching to 0000:00:10.0 00:13:54.533 Attaching to 0000:00:11.0 00:13:54.533 Attached to 0000:00:10.0 00:13:54.533 Attached to 0000:00:11.0 00:13:54.533 Initialization complete. Starting I/O... 
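[annotation] Setup is complete: the hotplug example app is attached to both allowed controllers, and remove_attach_helper is running with 3 hotplug events, a 6-second wait per event, and use_bdev=false (this first pass drives the example app directly, rather than querying bdevs over RPC as the later tgt_run_hotplug pass does). The whole run goes through the timing wrapper seen in the trace above, where TIMEFORMAT=%2R makes bash's `time` keyword report only the real elapsed seconds — that is where the 43.19 printed at the end of this run comes from. A reduced, runnable sketch of the idiom (the wrapper name here is illustrative, not the test's timing_cmd):

    # Print only the wall-clock seconds of a command, two decimals (e.g. 43.19).
    # TIMEFORMAT is the bash variable that controls the `time` keyword's output.
    time_it() {
        local TIMEFORMAT=%2R
        time "$@"
    }

    time_it sleep 1   # -> 1.00 (the test wraps remove_attach_helper the same way)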
00:13:54.533 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:13:54.533 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:13:54.533 00:13:55.465 QEMU NVMe Ctrl (12340 ): 1496 I/Os completed (+1496) 00:13:55.466 QEMU NVMe Ctrl (12341 ): 1503 I/Os completed (+1503) 00:13:55.466 00:13:56.400 QEMU NVMe Ctrl (12340 ): 3516 I/Os completed (+2020) 00:13:56.400 QEMU NVMe Ctrl (12341 ): 3525 I/Os completed (+2022) 00:13:56.400 00:13:57.337 QEMU NVMe Ctrl (12340 ): 5672 I/Os completed (+2156) 00:13:57.337 QEMU NVMe Ctrl (12341 ): 5682 I/Os completed (+2157) 00:13:57.337 00:13:58.720 QEMU NVMe Ctrl (12340 ): 7832 I/Os completed (+2160) 00:13:58.720 QEMU NVMe Ctrl (12341 ): 7842 I/Os completed (+2160) 00:13:58.720 00:13:59.289 QEMU NVMe Ctrl (12340 ): 9976 I/Os completed (+2144) 00:13:59.289 QEMU NVMe Ctrl (12341 ): 9986 I/Os completed (+2144) 00:13:59.289 00:14:00.225 18:12:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:00.225 18:12:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:00.225 18:12:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:00.225 [2024-12-06 18:12:10.630886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:00.225 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:00.225 [2024-12-06 18:12:10.632617] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.225 [2024-12-06 18:12:10.632676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.225 [2024-12-06 18:12:10.632697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.225 [2024-12-06 18:12:10.632718] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.225 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:00.225 [2024-12-06 18:12:10.635999] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.225 [2024-12-06 18:12:10.636051] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.225 [2024-12-06 18:12:10.636069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.225 [2024-12-06 18:12:10.636088] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.225 18:12:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:00.225 18:12:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:00.225 [2024-12-06 18:12:10.672616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:00.225 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:00.225 [2024-12-06 18:12:10.674165] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.225 [2024-12-06 18:12:10.674207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.225 [2024-12-06 18:12:10.674235] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.225 [2024-12-06 18:12:10.674258] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.225 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:00.225 [2024-12-06 18:12:10.676799] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.225 [2024-12-06 18:12:10.676838] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.225 [2024-12-06 18:12:10.676858] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.225 [2024-12-06 18:12:10.676877] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.225 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:00.225 EAL: Scan for (pci) bus failed. 00:14:00.225 18:12:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:00.225 18:12:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:00.225 18:12:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:00.225 18:12:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:00.225 18:12:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:00.483 00:14:00.483 18:12:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:00.483 18:12:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:00.483 18:12:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:00.483 18:12:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:00.483 18:12:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:00.483 Attaching to 0000:00:10.0 00:14:00.483 Attached to 0000:00:10.0 00:14:00.483 18:12:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:00.483 18:12:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:00.483 18:12:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:00.483 Attaching to 0000:00:11.0 00:14:00.483 Attached to 0000:00:11.0 00:14:01.419 QEMU NVMe Ctrl (12340 ): 2076 I/Os completed (+2076) 00:14:01.419 QEMU NVMe Ctrl (12341 ): 1808 I/Os completed (+1808) 00:14:01.419 00:14:02.357 QEMU NVMe Ctrl (12340 ): 4244 I/Os completed (+2168) 00:14:02.357 QEMU NVMe Ctrl (12341 ): 3976 I/Os completed (+2168) 00:14:02.357 00:14:03.292 QEMU NVMe Ctrl (12340 ): 6204 I/Os completed (+1960) 00:14:03.292 QEMU NVMe Ctrl (12341 ): 5940 I/Os completed (+1964) 00:14:03.292 00:14:04.666 QEMU NVMe Ctrl (12340 ): 8240 I/Os completed (+2036) 00:14:04.666 QEMU NVMe Ctrl (12341 ): 7976 I/Os completed (+2036) 00:14:04.666 00:14:05.619 QEMU NVMe Ctrl (12340 ): 10172 I/Os completed (+1932) 00:14:05.619 QEMU NVMe Ctrl (12341 ): 9908 I/Os completed (+1932) 00:14:05.619 00:14:06.552 QEMU NVMe Ctrl (12340 ): 12160 I/Os completed (+1988) 00:14:06.553 QEMU NVMe Ctrl (12341 ): 11896 I/Os completed (+1988) 00:14:06.553 00:14:07.490 QEMU NVMe Ctrl (12340 ): 14184 I/Os completed (+2024) 00:14:07.490 QEMU NVMe Ctrl (12341 ): 13920 I/Os completed (+2024) 
00:14:07.490 00:14:08.426 QEMU NVMe Ctrl (12340 ): 16144 I/Os completed (+1960) 00:14:08.426 QEMU NVMe Ctrl (12341 ): 15880 I/Os completed (+1960) 00:14:08.426 00:14:09.361 QEMU NVMe Ctrl (12340 ): 18121 I/Os completed (+1977) 00:14:09.361 QEMU NVMe Ctrl (12341 ): 17860 I/Os completed (+1980) 00:14:09.361 00:14:10.297 QEMU NVMe Ctrl (12340 ): 20067 I/Os completed (+1946) 00:14:10.297 QEMU NVMe Ctrl (12341 ): 19807 I/Os completed (+1947) 00:14:10.297 00:14:11.716 QEMU NVMe Ctrl (12340 ): 22006 I/Os completed (+1939) 00:14:11.716 QEMU NVMe Ctrl (12341 ): 21744 I/Os completed (+1937) 00:14:11.716 00:14:12.303 QEMU NVMe Ctrl (12340 ): 23913 I/Os completed (+1907) 00:14:12.303 QEMU NVMe Ctrl (12341 ): 23650 I/Os completed (+1906) 00:14:12.303 00:14:12.562 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:12.562 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:12.562 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:12.562 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:12.562 [2024-12-06 18:12:23.032298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:12.562 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:12.562 [2024-12-06 18:12:23.036573] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.562 [2024-12-06 18:12:23.036683] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.562 [2024-12-06 18:12:23.036736] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.562 [2024-12-06 18:12:23.036789] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.562 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:12.562 [2024-12-06 18:12:23.042439] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.562 [2024-12-06 18:12:23.042524] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.562 [2024-12-06 18:12:23.042559] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.562 [2024-12-06 18:12:23.042591] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.562 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:12.562 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:12.562 [2024-12-06 18:12:23.068671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:12.563 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:12.563 [2024-12-06 18:12:23.070454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.563 [2024-12-06 18:12:23.070507] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.563 [2024-12-06 18:12:23.070536] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.563 [2024-12-06 18:12:23.070558] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.563 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:12.563 [2024-12-06 18:12:23.073360] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.563 [2024-12-06 18:12:23.073407] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.563 [2024-12-06 18:12:23.073430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.563 [2024-12-06 18:12:23.073451] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:12.563 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:12.563 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:12.563 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:12.563 EAL: Scan for (pci) bus failed. 00:14:12.822 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:12.822 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:12.822 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:12.822 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:12.822 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:12.822 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:12.822 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:12.822 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:12.822 Attaching to 0000:00:10.0 00:14:12.822 Attached to 0000:00:10.0 00:14:13.082 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:13.082 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:13.082 18:12:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:13.082 Attaching to 0000:00:11.0 00:14:13.082 Attached to 0000:00:11.0 00:14:13.341 QEMU NVMe Ctrl (12340 ): 1032 I/Os completed (+1032) 00:14:13.341 QEMU NVMe Ctrl (12341 ): 824 I/Os completed (+824) 00:14:13.341 00:14:14.279 QEMU NVMe Ctrl (12340 ): 3088 I/Os completed (+2056) 00:14:14.279 QEMU NVMe Ctrl (12341 ): 2880 I/Os completed (+2056) 00:14:14.279 00:14:15.652 QEMU NVMe Ctrl (12340 ): 4996 I/Os completed (+1908) 00:14:15.652 QEMU NVMe Ctrl (12341 ): 4791 I/Os completed (+1911) 00:14:15.652 00:14:16.588 QEMU NVMe Ctrl (12340 ): 6909 I/Os completed (+1913) 00:14:16.588 QEMU NVMe Ctrl (12341 ): 6731 I/Os completed (+1940) 00:14:16.588 00:14:17.524 QEMU NVMe Ctrl (12340 ): 9001 I/Os completed (+2092) 00:14:17.524 QEMU NVMe Ctrl (12341 ): 8832 I/Os completed (+2101) 00:14:17.524 00:14:18.458 QEMU NVMe Ctrl (12340 ): 11201 I/Os completed (+2200) 00:14:18.458 QEMU NVMe Ctrl (12341 ): 11032 I/Os completed (+2200) 00:14:18.458 00:14:19.395 QEMU NVMe Ctrl (12340 ): 13353 I/Os completed (+2152) 00:14:19.395 QEMU NVMe Ctrl (12341 ): 13186 I/Os completed (+2154) 00:14:19.395 
00:14:20.332 QEMU NVMe Ctrl (12340 ): 15513 I/Os completed (+2160) 00:14:20.332 QEMU NVMe Ctrl (12341 ): 15346 I/Os completed (+2160) 00:14:20.332 00:14:21.269 QEMU NVMe Ctrl (12340 ): 17677 I/Os completed (+2164) 00:14:21.269 QEMU NVMe Ctrl (12341 ): 17510 I/Os completed (+2164) 00:14:21.269 00:14:22.644 QEMU NVMe Ctrl (12340 ): 19909 I/Os completed (+2232) 00:14:22.644 QEMU NVMe Ctrl (12341 ): 19742 I/Os completed (+2232) 00:14:22.644 00:14:23.577 QEMU NVMe Ctrl (12340 ): 22065 I/Os completed (+2156) 00:14:23.577 QEMU NVMe Ctrl (12341 ): 21898 I/Os completed (+2156) 00:14:23.577 00:14:24.507 QEMU NVMe Ctrl (12340 ): 24193 I/Os completed (+2128) 00:14:24.507 QEMU NVMe Ctrl (12341 ): 24031 I/Os completed (+2133) 00:14:24.507 00:14:25.102 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:25.102 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:25.102 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:25.102 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:25.102 [2024-12-06 18:12:35.446687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:25.102 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:25.102 [2024-12-06 18:12:35.448692] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.102 [2024-12-06 18:12:35.448858] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.102 [2024-12-06 18:12:35.448910] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.103 [2024-12-06 18:12:35.449013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.103 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:25.103 [2024-12-06 18:12:35.454241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.103 [2024-12-06 18:12:35.454388] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.103 [2024-12-06 18:12:35.454416] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.103 [2024-12-06 18:12:35.454436] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.103 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:25.103 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:25.103 [2024-12-06 18:12:35.484238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:25.103 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:25.103 [2024-12-06 18:12:35.485948] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.103 [2024-12-06 18:12:35.486009] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.103 [2024-12-06 18:12:35.486034] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.103 [2024-12-06 18:12:35.486056] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.103 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:25.103 [2024-12-06 18:12:35.488756] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.103 [2024-12-06 18:12:35.488802] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.103 [2024-12-06 18:12:35.488827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.103 [2024-12-06 18:12:35.488844] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.103 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:25.103 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:25.103 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:25.103 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:25.103 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:25.360 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:25.360 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:25.360 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:25.360 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:25.360 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:25.360 Attaching to 0000:00:10.0 00:14:25.360 Attached to 0000:00:10.0 00:14:25.360 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:25.360 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:25.360 18:12:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:25.360 Attaching to 0000:00:11.0 00:14:25.360 Attached to 0000:00:11.0 00:14:25.360 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:25.360 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:25.360 [2024-12-06 18:12:35.822934] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:14:37.560 18:12:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:37.560 18:12:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:37.560 18:12:47 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.19 00:14:37.560 18:12:47 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.19 00:14:37.560 18:12:47 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:37.560 18:12:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.19 00:14:37.560 18:12:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.19 2 00:14:37.560 remove_attach_helper took 43.19s to complete (handling 2 nvme drive(s)) 18:12:47 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:14:44.160 18:12:53 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68113 00:14:44.160 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68113) - No such process 00:14:44.160 18:12:53 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68113 00:14:44.161 18:12:53 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:14:44.161 18:12:53 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:14:44.161 18:12:53 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:14:44.161 18:12:53 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68655 00:14:44.161 18:12:53 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:44.161 18:12:53 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:14:44.161 18:12:53 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68655 00:14:44.161 18:12:53 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68655 ']' 00:14:44.161 18:12:53 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.161 18:12:53 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.161 18:12:53 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.161 18:12:53 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.161 18:12:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:44.161 [2024-12-06 18:12:53.934794] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:14:44.161 [2024-12-06 18:12:53.935105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68655 ] 00:14:44.161 [2024-12-06 18:12:54.106049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.161 [2024-12-06 18:12:54.244212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.728 18:12:55 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.728 18:12:55 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:14:44.728 18:12:55 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:44.728 18:12:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.728 18:12:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:44.728 18:12:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.728 18:12:55 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:14:44.728 18:12:55 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:44.728 18:12:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:44.728 18:12:55 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:44.728 18:12:55 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:44.728 18:12:55 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:44.728 18:12:55 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:44.728 18:12:55 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:14:44.728 18:12:55 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:44.728 18:12:55 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:44.728 18:12:55 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:44.728 18:12:55 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:44.728 18:12:55 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:51.296 18:13:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.296 18:13:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:51.296 [2024-12-06 18:13:01.256833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:51.296 [2024-12-06 18:13:01.259524] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.296 [2024-12-06 18:13:01.259581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.296 [2024-12-06 18:13:01.259604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.296 [2024-12-06 18:13:01.259673] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.296 [2024-12-06 18:13:01.259689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.296 [2024-12-06 18:13:01.259708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.296 [2024-12-06 18:13:01.259722] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.296 [2024-12-06 18:13:01.259737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.296 [2024-12-06 18:13:01.259750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.296 [2024-12-06 18:13:01.259769] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.296 [2024-12-06 18:13:01.259781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.296 [2024-12-06 18:13:01.259796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.296 18:13:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.296 18:13:01 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:51.296 [2024-12-06 18:13:01.755989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:51.296 [2024-12-06 18:13:01.759234] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.296 [2024-12-06 18:13:01.759317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.296 [2024-12-06 18:13:01.759343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.296 [2024-12-06 18:13:01.759371] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.296 [2024-12-06 18:13:01.759390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.296 [2024-12-06 18:13:01.759405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.296 [2024-12-06 18:13:01.759437] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.296 [2024-12-06 18:13:01.759451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.296 [2024-12-06 18:13:01.759468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.296 [2024-12-06 18:13:01.759482] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.296 [2024-12-06 18:13:01.759498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.296 [2024-12-06 18:13:01.759512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:51.296 18:13:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.296 18:13:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:51.296 18:13:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:51.296 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:51.556 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:51.556 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:51.556 18:13:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:51.556 18:13:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:51.556 18:13:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:51.556 18:13:02 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:51.556 18:13:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:51.556 18:13:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:51.815 18:13:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:51.815 18:13:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:51.815 18:13:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:04.015 18:13:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.015 18:13:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:04.015 18:13:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:04.015 [2024-12-06 18:13:14.235868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:04.015 [2024-12-06 18:13:14.238914] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.015 [2024-12-06 18:13:14.239078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.015 [2024-12-06 18:13:14.239207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.015 [2024-12-06 18:13:14.239300] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.015 [2024-12-06 18:13:14.239400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.015 [2024-12-06 18:13:14.239466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.015 [2024-12-06 18:13:14.239569] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.015 [2024-12-06 18:13:14.239615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.015 [2024-12-06 18:13:14.239671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.015 [2024-12-06 18:13:14.239738] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.015 [2024-12-06 18:13:14.239795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.015 [2024-12-06 18:13:14.239915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:04.015 18:13:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.015 18:13:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:04.015 18:13:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:04.015 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:04.275 [2024-12-06 18:13:14.834904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:15:04.275 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:04.275 [2024-12-06 18:13:14.837641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.275 [2024-12-06 18:13:14.837679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.275 [2024-12-06 18:13:14.837702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.275 [2024-12-06 18:13:14.837726] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.275 [2024-12-06 18:13:14.837741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.275 [2024-12-06 18:13:14.837753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.275 [2024-12-06 18:13:14.837768] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.275 [2024-12-06 18:13:14.837796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.275 [2024-12-06 18:13:14.837811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.275 [2024-12-06 18:13:14.837824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.275 [2024-12-06 18:13:14.837839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.275 [2024-12-06 18:13:14.837851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.275 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:04.275 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:04.275 18:13:14 sw_hotplug -- 
nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:04.275 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:04.275 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:04.275 18:13:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.275 18:13:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:04.534 18:13:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.534 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:04.534 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:04.534 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:04.534 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:04.534 18:13:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:04.534 18:13:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:04.793 18:13:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:04.793 18:13:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:04.793 18:13:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:04.793 18:13:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:04.793 18:13:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:04.793 18:13:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:04.793 18:13:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:16.996 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:16.996 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:16.996 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:16.996 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:16.996 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:16.996 18:13:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.996 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:16.996 18:13:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:16.996 18:13:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.996 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:16.996 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:16.996 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:16.996 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:16.996 [2024-12-06 18:13:27.314831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:15:16.996 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:16.996 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:16.996 [2024-12-06 18:13:27.318187] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:16.996 [2024-12-06 18:13:27.318347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.996 [2024-12-06 18:13:27.318585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.996 [2024-12-06 18:13:27.318673] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:16.996 [2024-12-06 18:13:27.318714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.996 [2024-12-06 18:13:27.318779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.996 [2024-12-06 18:13:27.318840] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:16.996 [2024-12-06 18:13:27.318880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.996 [2024-12-06 18:13:27.319007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.996 [2024-12-06 18:13:27.319084] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:16.996 [2024-12-06 18:13:27.319122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.996 [2024-12-06 18:13:27.319330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.996 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:16.996 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:16.996 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:16.996 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:16.996 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:16.996 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:16.996 18:13:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.996 18:13:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:16.996 18:13:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.997 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:16.997 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:17.254 [2024-12-06 18:13:27.714190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
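The bdev_bdfs calls traced above (sw_hotplug.sh@12-13) resolve to a small helper that asks the running SPDK target which PCI addresses still back an NVMe bdev. A reconstruction from the xtrace lines alone; the /dev/fd/63 in the trace shows jq actually reads rpc_cmd's output through a process substitution, simplified to a plain pipe here, and rpc_cmd itself is the harness wrapper around scripts/rpc.py, assumed to be in scope:

# Sketch reconstructed from the sw_hotplug.sh@12-13 xtrace above, not the
# verbatim source: list the BDFs that still back an NVMe bdev, de-duplicated.
bdev_bdfs() {
    rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u    # one BDF per controller, however many namespaces it has
}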
00:15:17.254 [2024-12-06 18:13:27.716730] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:17.254 [2024-12-06 18:13:27.716882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.254 [2024-12-06 18:13:27.716913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.254 [2024-12-06 18:13:27.716937] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:17.254 [2024-12-06 18:13:27.716953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.254 [2024-12-06 18:13:27.716966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.254 [2024-12-06 18:13:27.716983] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:17.254 [2024-12-06 18:13:27.716994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.254 [2024-12-06 18:13:27.717012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.254 [2024-12-06 18:13:27.717026] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:17.254 [2024-12-06 18:13:27.717041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.254 [2024-12-06 18:13:27.717053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.512 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:17.512 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:17.512 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:17.512 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:17.512 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:17.512 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:17.512 18:13:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.512 18:13:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:17.512 18:13:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.512 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:17.512 18:13:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:17.512 18:13:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:17.512 18:13:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:17.512 18:13:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:17.770 18:13:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:17.770 18:13:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:17.770 18:13:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:17.770 18:13:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:17.770 18:13:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
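The (( 1 > 0 )) / sleep 0.5 / 'Still waiting for %s to be gone' sequence that keeps recurring in this stretch is sw_hotplug.sh@50-51 polling for the surprise-removed controllers to drop out of the bdev list. A minimal sketch of that loop, assuming the bdev_bdfs reconstruction above:

# Poll every half second until no NVMe bdev reports a PCI address (@50-@51).
bdfs=($(bdev_bdfs))
while ((${#bdfs[@]} > 0)); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
done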
00:15:17.770 18:13:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:17.770 18:13:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:17.770 18:13:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:30.062 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:30.062 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:30.062 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:30.062 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:30.062 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:30.062 18:13:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.062 18:13:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:30.062 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:30.062 18:13:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.062 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:30.062 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:30.062 18:13:40 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.13 00:15:30.062 18:13:40 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.13 00:15:30.062 18:13:40 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:15:30.062 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.13 00:15:30.063 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.13 2 00:15:30.063 remove_attach_helper took 45.13s to complete (handling 2 nvme drive(s)) 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:15:30.063 18:13:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.063 18:13:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:30.063 18:13:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.063 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:30.063 18:13:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.063 18:13:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:30.063 18:13:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.063 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:15:30.063 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:30.063 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:30.063 18:13:40 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:15:30.063 18:13:40 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:15:30.063 18:13:40 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:15:30.063 18:13:40 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:15:30.063 18:13:40 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:15:30.063 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:30.063 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:30.063 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:30.063 18:13:40 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:30.063 18:13:40 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:36.650 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:36.650 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:36.650 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:36.650 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:36.650 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:36.650 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:36.650 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:36.650 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:36.650 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:36.650 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:36.650 18:13:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.650 18:13:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:36.650 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:36.651 [2024-12-06 18:13:46.426775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:36.651 [2024-12-06 18:13:46.429543] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.651 [2024-12-06 18:13:46.429708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.651 [2024-12-06 18:13:46.429735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.651 [2024-12-06 18:13:46.429767] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.651 [2024-12-06 18:13:46.429782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.651 [2024-12-06 18:13:46.429799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.651 [2024-12-06 18:13:46.429815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.651 [2024-12-06 18:13:46.429832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.651 [2024-12-06 18:13:46.429846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.651 [2024-12-06 18:13:46.429865] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.651 [2024-12-06 18:13:46.429878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.651 [2024-12-06 18:13:46.429901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.651 18:13:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.651 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:36.651 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:36.651 [2024-12-06 18:13:46.826147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
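The helper was just relaunched as remove_attach_helper 3 6 true (via debug_remove_attach_helper and timing_cmd, traced above), which makes its overall shape readable from the @27-@40 lines: three hotplug events, a six-second settle, verification through the bdev layer. A skeleton assembled from those trace lines; xtrace never shows where the @40 echo is redirected, so the sysfs path below is an assumption, and nvmes is the harness array of BDFs under test:

remove_attach_helper() {            # sw_hotplug.sh@27-@43, reconstructed sketch
    local hotplug_events=$1         # 3 in this run
    local hotplug_wait=$2           # 6 seconds
    local use_bdev=$3               # true: verify via bdev_get_bdevs
    local dev bdfs

    sleep "$hotplug_wait"           # @36: let the target finish attaching

    while ((hotplug_events--)); do  # @38
        for dev in "${nvmes[@]}"; do              # @39-@40: surprise-remove
            echo 1 > "/sys/bus/pci/devices/$dev/remove"   # target path assumed
        done
        # @43-@51: wait for the bdevs to vanish, then @56-@66 re-probe the
        # devices, sleep, and @68-@71 check that both BDFs are back
    done
}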
00:15:36.651 [2024-12-06 18:13:46.828210] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.651 [2024-12-06 18:13:46.828257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.651 [2024-12-06 18:13:46.828433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.651 [2024-12-06 18:13:46.828466] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.651 [2024-12-06 18:13:46.828486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.651 [2024-12-06 18:13:46.828501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.651 [2024-12-06 18:13:46.828520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.651 [2024-12-06 18:13:46.828534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.651 [2024-12-06 18:13:46.828551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.651 [2024-12-06 18:13:46.828567] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.651 [2024-12-06 18:13:46.828583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.651 [2024-12-06 18:13:46.828597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.651 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:36.651 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:36.651 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:36.651 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:36.651 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:36.651 18:13:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:36.651 18:13:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.651 18:13:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:36.651 18:13:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.651 18:13:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:36.651 18:13:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:36.651 18:13:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:36.651 18:13:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:36.651 18:13:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:36.651 18:13:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:36.651 18:13:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:36.651 18:13:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:36.651 18:13:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:36.651 18:13:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:36.910 18:13:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:36.910 18:13:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:36.910 18:13:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:49.162 18:13:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.162 18:13:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:49.162 18:13:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:49.162 [2024-12-06 18:13:59.405907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:49.162 [2024-12-06 18:13:59.409440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.162 [2024-12-06 18:13:59.409538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.162 [2024-12-06 18:13:59.409612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.162 [2024-12-06 18:13:59.409688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.162 [2024-12-06 18:13:59.409729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.162 [2024-12-06 18:13:59.409806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.162 [2024-12-06 18:13:59.409865] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.162 [2024-12-06 18:13:59.409907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.162 [2024-12-06 18:13:59.409964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.162 [2024-12-06 18:13:59.410062] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.162 [2024-12-06 18:13:59.410102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.162 [2024-12-06 18:13:59.410291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:49.162 18:13:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.162 18:13:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:49.162 18:13:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:49.162 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:49.422 [2024-12-06 18:13:59.905121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:15:49.422 [2024-12-06 18:13:59.907895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.422 [2024-12-06 18:13:59.908071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.422 [2024-12-06 18:13:59.908105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.422 [2024-12-06 18:13:59.908133] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.422 [2024-12-06 18:13:59.908153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.422 [2024-12-06 18:13:59.908167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.422 [2024-12-06 18:13:59.908186] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.422 [2024-12-06 18:13:59.908200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.422 [2024-12-06 18:13:59.908217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.422 [2024-12-06 18:13:59.908233] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.422 [2024-12-06 18:13:59.908249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.422 [2024-12-06 18:13:59.908274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.422 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:49.422 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:49.422 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:49.422 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:49.422 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:49.422 18:13:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:15:49.422 18:13:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.422 18:13:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:49.680 18:13:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.680 18:14:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:49.680 18:14:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:49.680 18:14:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:49.680 18:14:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:49.680 18:14:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:49.680 18:14:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:49.680 18:14:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:49.680 18:14:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:49.680 18:14:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:49.680 18:14:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:49.939 18:14:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:49.939 18:14:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:49.939 18:14:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:02.198 18:14:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.198 18:14:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:02.198 18:14:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:02.198 [2024-12-06 18:14:12.484907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
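Between removal rounds the trace always shows the same writes (@56, then @58-@62 per device): a 1, the driver name uio_pci_generic, the BDF twice, and an empty string. xtrace does not record redirection targets, so where those echoes land is not in the log; the pattern is consistent with a PCI rescan followed by a driver_override rebind, sketched here with every sysfs path an assumption rather than a fact from the trace:

echo 1 > /sys/bus/pci/rescan    # @56: rediscover the removed functions (path assumed)
for dev in "${nvmes[@]}"; do    # @58
    # @59: steer the device toward uio_pci_generic (path assumed)
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    # @60/@61: the BDF is written twice, plausibly to drivers_probe and to the
    # driver's bind attribute; both targets are guesses
    echo "$dev" > /sys/bus/pci/drivers_probe
    echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind
    # @62: clear the override so later tests see an unconstrained device (path assumed)
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"
done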
00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:02.198 18:14:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.198 18:14:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:02.198 [2024-12-06 18:14:12.487214] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:02.198 [2024-12-06 18:14:12.487329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.198 [2024-12-06 18:14:12.487405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.198 [2024-12-06 18:14:12.487592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:02.198 [2024-12-06 18:14:12.487760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.198 [2024-12-06 18:14:12.487895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.198 [2024-12-06 18:14:12.487970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:02.198 [2024-12-06 18:14:12.488115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.198 [2024-12-06 18:14:12.488242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.198 [2024-12-06 18:14:12.488383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:02.198 [2024-12-06 18:14:12.488427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.198 [2024-12-06 18:14:12.488623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.198 18:14:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:02.198 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:02.766 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:02.766 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:02.766 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:02.766 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:02.766 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:02.766 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:02.766 18:14:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.766 18:14:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:02.766 18:14:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.766 [2024-12-06 18:14:13.083943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
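Every rpc_cmd in this log is bracketed by the same autotest_common.sh lines: @563 xtrace_disable, @10 set +x, and a trailing @591 [[ 0 == 0 ]] status check. That keeps the RPC plumbing out of the trace while still failing the test on a bad exit code. A minimal sketch of the pattern; the helper bodies are assumptions (the real versions also save and restore the prior xtrace state), and quiet_rpc is a hypothetical name used only for illustration:

xtrace_disable() { set +x; }    # @563: silence command echoing (body assumed)
xtrace_restore() { set -x; }    # assumed counterpart

quiet_rpc() {                   # hypothetical wrapper, not from the source
    local rc=0
    xtrace_disable
    rpc_cmd "$@" || rc=$?       # run the RPC without flooding the log
    xtrace_restore
    [[ $rc == 0 ]]              # @591: the '[[ 0 == 0 ]]' lines are this check
}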
00:16:02.766 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:02.766 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:02.766 [2024-12-06 18:14:13.087328] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:02.766 [2024-12-06 18:14:13.087497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.766 [2024-12-06 18:14:13.087666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.766 [2024-12-06 18:14:13.087866] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:02.766 [2024-12-06 18:14:13.087993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.766 [2024-12-06 18:14:13.088146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.766 [2024-12-06 18:14:13.088327] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:02.766 [2024-12-06 18:14:13.088412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.766 [2024-12-06 18:14:13.088524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.766 [2024-12-06 18:14:13.088636] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:02.766 [2024-12-06 18:14:13.088690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.766 [2024-12-06 18:14:13.088806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.025 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:03.025 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:03.025 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:03.025 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:03.025 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:03.025 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:03.025 18:14:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.025 18:14:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:03.284 18:14:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.284 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:03.284 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:03.284 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:03.284 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:03.284 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:03.284 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:03.543 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:03.543 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:03.543 18:14:13 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:03.543 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:03.543 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:03.543 18:14:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:03.543 18:14:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:15.749 18:14:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:15.749 18:14:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:15.749 18:14:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:15.749 18:14:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:15.749 18:14:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:15.749 18:14:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:15.749 18:14:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.749 18:14:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:15.749 18:14:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.749 18:14:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:15.749 18:14:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:15.749 18:14:26 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.71 00:16:15.749 18:14:26 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.71 00:16:15.749 18:14:26 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:15.749 18:14:26 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.71 00:16:15.749 18:14:26 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.71 2 00:16:15.749 remove_attach_helper took 45.71s to complete (handling 2 nvme drive(s)) 18:14:26 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:16:15.749 18:14:26 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68655 00:16:15.749 18:14:26 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68655 ']' 00:16:15.749 18:14:26 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68655 00:16:15.749 18:14:26 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:16:15.749 18:14:26 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:15.749 18:14:26 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68655 00:16:15.749 18:14:26 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:15.749 18:14:26 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:15.749 18:14:26 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68655' 00:16:15.749 killing process with pid 68655 00:16:15.749 18:14:26 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68655 00:16:15.749 18:14:26 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68655 00:16:18.283 18:14:28 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:18.542 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:19.112 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:19.112 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:19.373 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:19.373 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 
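The time=45.71 and helper_time=45.71 lines above (autotest_common.sh@713-@722 feeding sw_hotplug.sh@21-22), like the real/user/sys block just below, come from bash's built-in time. Setting TIMEFORMAT=%2R makes the time keyword print only wall-clock seconds with two decimals, which can then be captured into a variable. A self-contained sketch of the trick; the real timing_cmd also juggles extra file descriptors so the timed command's own output survives (hinted at by the exec and [[ -t 0 ]] at @711), which this simplified version does not:

timing_cmd() {
    local time=0 TIMEFORMAT=%2R    # @713: %2R prints only the "real" seconds
    # Simplified: the command's own output is discarded; only the time
    # report, emitted on the brace group's stderr, is captured.
    time=$( { time "$@" >/dev/null 2>&1; } 2>&1 )
    echo "$time"                   # @720: hand the measurement to the caller
}

helper_time=$(timing_cmd sleep 1)  # illustrative stand-in for the real helper
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
    "$helper_time" 2               # @22: the summary line seen in the trace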
00:16:19.373
00:16:19.373 real 2m34.360s
00:16:19.373 user 1m51.818s
00:16:19.373 sys 0m22.677s
00:16:19.373 18:14:29 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:19.373 18:14:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:19.373 ************************************
00:16:19.373 END TEST sw_hotplug
00:16:19.373 ************************************
00:16:19.373 18:14:29 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]]
00:16:19.373 18:14:29 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh
00:16:19.373 18:14:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:19.373 18:14:29 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:19.373 18:14:29 -- common/autotest_common.sh@10 -- # set +x
00:16:19.373 ************************************
00:16:19.373 START TEST nvme_xnvme
00:16:19.373 ************************************
00:16:19.373 18:14:29 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh
00:16:19.654 * Looking for test storage...
00:16:19.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:16:19.654 18:14:30 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:19.654 18:14:30 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version
00:16:19.654 18:14:30 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:19.654 18:14:30 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-:
00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1
00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-:
00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2
00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<'
00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2
00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1
00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in
00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@345 -- # : 1
00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:19.654 18:14:30 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:19.655 18:14:30 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:19.655 18:14:30 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:19.655 18:14:30 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:19.655 18:14:30 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:19.655 18:14:30 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:19.655 18:14:30 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:19.655 18:14:30 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:19.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.655 --rc genhtml_branch_coverage=1 00:16:19.655 --rc genhtml_function_coverage=1 00:16:19.655 --rc genhtml_legend=1 00:16:19.655 --rc geninfo_all_blocks=1 00:16:19.655 --rc geninfo_unexecuted_blocks=1 00:16:19.655 00:16:19.655 ' 00:16:19.655 18:14:30 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:19.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.655 --rc genhtml_branch_coverage=1 00:16:19.655 --rc genhtml_function_coverage=1 00:16:19.655 --rc genhtml_legend=1 00:16:19.655 --rc geninfo_all_blocks=1 00:16:19.655 --rc geninfo_unexecuted_blocks=1 00:16:19.655 00:16:19.655 ' 00:16:19.655 18:14:30 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:19.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.655 --rc genhtml_branch_coverage=1 00:16:19.655 --rc genhtml_function_coverage=1 00:16:19.655 --rc genhtml_legend=1 00:16:19.655 --rc geninfo_all_blocks=1 00:16:19.655 --rc geninfo_unexecuted_blocks=1 00:16:19.655 00:16:19.655 ' 00:16:19.655 18:14:30 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:19.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.655 --rc genhtml_branch_coverage=1 00:16:19.655 --rc genhtml_function_coverage=1 00:16:19.655 --rc genhtml_legend=1 00:16:19.655 --rc geninfo_all_blocks=1 00:16:19.655 --rc geninfo_unexecuted_blocks=1 00:16:19.655 00:16:19.655 ' 00:16:19.655 18:14:30 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:16:19.655 18:14:30 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:16:19.655 18:14:30 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:19.655 18:14:30 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:16:19.655 18:14:30 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:19.655 18:14:30 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:19.655 18:14:30 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:19.656 18:14:30 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:16:19.656 18:14:30 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:16:19.656 18:14:30 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:16:19.656 18:14:30 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:16:19.657 18:14:30 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:19.659 18:14:30 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:16:19.659 18:14:30 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:16:19.659 18:14:30 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:19.659 18:14:30 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:19.659 18:14:30 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:16:19.659 18:14:30 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:16:19.659 18:14:30 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:16:19.659 18:14:30 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:16:19.660 18:14:30 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:16:19.660 18:14:30 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:16:19.660 18:14:30 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:19.660 18:14:30 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:19.660 18:14:30 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:19.660 18:14:30 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:19.660 18:14:30 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:19.660 18:14:30 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:19.660 18:14:30 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:16:19.660 18:14:30 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:19.660 #define SPDK_CONFIG_H 00:16:19.660 #define SPDK_CONFIG_AIO_FSDEV 1 00:16:19.660 #define SPDK_CONFIG_APPS 1 00:16:19.660 #define SPDK_CONFIG_ARCH native 00:16:19.660 #define SPDK_CONFIG_ASAN 1 00:16:19.660 #undef SPDK_CONFIG_AVAHI 00:16:19.660 #undef SPDK_CONFIG_CET 00:16:19.660 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:16:19.660 #define SPDK_CONFIG_COVERAGE 1 00:16:19.660 #define SPDK_CONFIG_CROSS_PREFIX 00:16:19.660 #undef SPDK_CONFIG_CRYPTO 00:16:19.660 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:19.660 #undef SPDK_CONFIG_CUSTOMOCF 00:16:19.660 #undef SPDK_CONFIG_DAOS 00:16:19.660 #define SPDK_CONFIG_DAOS_DIR 00:16:19.660 #define SPDK_CONFIG_DEBUG 1 00:16:19.660 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:19.660 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:19.660 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:19.660 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:19.660 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:19.660 #undef SPDK_CONFIG_DPDK_UADK 00:16:19.660 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:19.660 #define SPDK_CONFIG_EXAMPLES 1 00:16:19.660 #undef SPDK_CONFIG_FC 00:16:19.660 #define SPDK_CONFIG_FC_PATH 00:16:19.660 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:19.660 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:19.660 #define SPDK_CONFIG_FSDEV 1 00:16:19.660 #undef SPDK_CONFIG_FUSE 00:16:19.660 #undef SPDK_CONFIG_FUZZER 00:16:19.660 #define SPDK_CONFIG_FUZZER_LIB 00:16:19.660 #undef SPDK_CONFIG_GOLANG 00:16:19.660 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:19.660 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:19.660 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:19.660 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:16:19.660 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:19.660 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:19.660 #undef SPDK_CONFIG_HAVE_LZ4 00:16:19.660 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:16:19.661 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:16:19.661 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:19.661 #define SPDK_CONFIG_IDXD 1 00:16:19.661 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:19.661 #undef SPDK_CONFIG_IPSEC_MB 00:16:19.661 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:19.661 #define SPDK_CONFIG_ISAL 1 00:16:19.661 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:19.661 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:19.661 #define SPDK_CONFIG_LIBDIR 00:16:19.661 #undef SPDK_CONFIG_LTO 00:16:19.661 #define SPDK_CONFIG_MAX_LCORES 128 00:16:19.661 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:16:19.661 #define SPDK_CONFIG_NVME_CUSE 1 00:16:19.661 #undef SPDK_CONFIG_OCF 00:16:19.661 #define SPDK_CONFIG_OCF_PATH 00:16:19.661 #define SPDK_CONFIG_OPENSSL_PATH 00:16:19.661 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:19.661 #define SPDK_CONFIG_PGO_DIR 00:16:19.661 #undef SPDK_CONFIG_PGO_USE 00:16:19.661 #define SPDK_CONFIG_PREFIX /usr/local 00:16:19.661 #undef SPDK_CONFIG_RAID5F 00:16:19.661 #undef SPDK_CONFIG_RBD 00:16:19.661 #define SPDK_CONFIG_RDMA 1 00:16:19.661 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:19.661 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:19.661 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:19.661 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:19.661 #define SPDK_CONFIG_SHARED 1 00:16:19.661 #undef SPDK_CONFIG_SMA 00:16:19.661 #define SPDK_CONFIG_TESTS 1 00:16:19.661 #undef SPDK_CONFIG_TSAN 00:16:19.661 #define SPDK_CONFIG_UBLK 1 00:16:19.661 #define SPDK_CONFIG_UBSAN 1 00:16:19.661 #undef SPDK_CONFIG_UNIT_TESTS 00:16:19.661 #undef SPDK_CONFIG_URING 00:16:19.661 #define SPDK_CONFIG_URING_PATH 00:16:19.661 #undef SPDK_CONFIG_URING_ZNS 00:16:19.661 #undef SPDK_CONFIG_USDT 00:16:19.661 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:19.661 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:19.661 #undef SPDK_CONFIG_VFIO_USER 00:16:19.661 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:19.661 #define SPDK_CONFIG_VHOST 1 00:16:19.661 #define SPDK_CONFIG_VIRTIO 1 00:16:19.661 #undef SPDK_CONFIG_VTUNE 00:16:19.661 #define SPDK_CONFIG_VTUNE_DIR 00:16:19.661 #define SPDK_CONFIG_WERROR 1 00:16:19.661 #define SPDK_CONFIG_WPDK_DIR 00:16:19.661 #define SPDK_CONFIG_XNVME 1 00:16:19.661 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:19.661 18:14:30 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:19.661 18:14:30 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:19.661 18:14:30 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:19.661 18:14:30 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.661 18:14:30 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.661 18:14:30 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.661 18:14:30 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.661 18:14:30 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.661 18:14:30 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.661 18:14:30 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:19.661 18:14:30 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.661 18:14:30 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:19.661 18:14:30 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:19.661 18:14:30 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:19.661 18:14:30 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:19.661 18:14:30 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:16:19.661 18:14:30 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:16:19.661 18:14:30 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:16:19.662 18:14:30 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:16:19.662 18:14:30 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:16:19.662 18:14:30 nvme_xnvme -- pm/common@68 -- # uname -s 00:16:19.662 18:14:30 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:16:19.662 18:14:30 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:19.662 
18:14:30 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:19.662 18:14:30 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:19.662 18:14:30 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:19.662 18:14:30 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:19.662 18:14:30 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:19.662 18:14:30 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:16:19.662 18:14:30 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:19.662 18:14:30 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:19.662 18:14:30 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:19.662 18:14:30 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:19.662 18:14:30 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:16:19.662 18:14:30 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:16:19.662 18:14:30 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:16:19.665 18:14:30 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:16:19.666 18:14:30 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:19.666 18:14:30 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:16:19.666 18:14:30 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
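[Annotation] The LeakSanitizer setup traced above at autotest_common.sh@204-@244 reduces to the sketch below. This is a paraphrase of the traced commands, not the verbatim SPDK source; the redirection into the suppression file is an assumption, since xtrace does not record redirections.

    # Build an LSAN suppression file and point the sanitizer at it.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    # The trace shows only `echo leak:libfuse3.so`; appending it to the
    # suppression file is assumed.
    echo "leak:libfuse3.so" >> "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file
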
00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70012 ]] 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70012 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.EgspIc 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.EgspIc/tests/xnvme /tmp/spdk.EgspIc 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:16:19.928 18:14:30 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975289856 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592596480 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:19.928 18:14:30 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261657600 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975289856 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592596480 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:19.929 18:14:30 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266277888 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=93544853504 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6157926400 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:16:19.929 * Looking for test storage... 
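[Annotation] The df parsing that just completed (set_test_storage, autotest_common.sh@340-@376) can be reconstructed as the sketch below. The *1024 conversion is an assumption: `df -T` prints 1K blocks, while the traced avails[]/sizes[] values are byte counts.

    # Read every `df -T` row into associative maps keyed by mount point.
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts[$mount]=$source
        fss[$mount]=$fs
        sizes[$mount]=$((size * 1024))
        uses[$mount]=$((use * 1024))
        avails[$mount]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)

The candidate loop traced next (@382-@400) then picks the first storage_candidates entry whose mount point has avails[] >= requested_size and exports it as SPDK_TEST_STORAGE.
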
00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13975289856 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:19.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:19.929 18:14:30 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:19.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.929 --rc genhtml_branch_coverage=1 00:16:19.929 --rc genhtml_function_coverage=1 00:16:19.929 --rc genhtml_legend=1 00:16:19.929 --rc geninfo_all_blocks=1 00:16:19.929 --rc geninfo_unexecuted_blocks=1 00:16:19.929 00:16:19.929 ' 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:19.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.929 --rc genhtml_branch_coverage=1 00:16:19.929 --rc genhtml_function_coverage=1 00:16:19.929 --rc genhtml_legend=1 00:16:19.929 --rc geninfo_all_blocks=1 
00:16:19.929 --rc geninfo_unexecuted_blocks=1 00:16:19.929 00:16:19.929 ' 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:19.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.929 --rc genhtml_branch_coverage=1 00:16:19.929 --rc genhtml_function_coverage=1 00:16:19.929 --rc genhtml_legend=1 00:16:19.929 --rc geninfo_all_blocks=1 00:16:19.929 --rc geninfo_unexecuted_blocks=1 00:16:19.929 00:16:19.929 ' 00:16:19.929 18:14:30 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:19.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.929 --rc genhtml_branch_coverage=1 00:16:19.929 --rc genhtml_function_coverage=1 00:16:19.930 --rc genhtml_legend=1 00:16:19.930 --rc geninfo_all_blocks=1 00:16:19.930 --rc geninfo_unexecuted_blocks=1 00:16:19.930 00:16:19.930 ' 00:16:19.930 18:14:30 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:19.930 18:14:30 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:19.930 18:14:30 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.930 18:14:30 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.930 18:14:30 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.930 18:14:30 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.930 18:14:30 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.930 18:14:30 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.930 18:14:30 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:19.930 18:14:30 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.930 18:14:30 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:16:19.930 18:14:30 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:20.513 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:20.771 Waiting for block devices as requested 00:16:20.771 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:21.030 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:21.030 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:21.030 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:26.302 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:26.302 18:14:36 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:16:26.561 18:14:37 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:16:26.561 18:14:37 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:16:26.820 18:14:37 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:16:26.820 18:14:37 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:16:26.820 18:14:37 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:16:26.820 18:14:37 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:16:26.820 18:14:37 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:16:26.820 No valid GPT data, bailing 00:16:26.820 18:14:37 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:26.820 18:14:37 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:16:26.820 18:14:37 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:16:26.820 18:14:37 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:16:26.820 18:14:37 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:16:26.820 18:14:37 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:16:26.820 18:14:37 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:16:26.820 18:14:37 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:16:26.820 18:14:37 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:26.820 18:14:37 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:16:26.820 18:14:37 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:16:26.820 18:14:37 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:16:26.820 18:14:37 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:26.820 18:14:37 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:26.820 18:14:37 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:26.820 18:14:37 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:26.820 18:14:37 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:26.820 18:14:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:26.820 18:14:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.820 18:14:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:26.820 ************************************ 00:16:26.820 START TEST xnvme_rpc 00:16:26.820 ************************************ 00:16:26.820 18:14:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:26.820 18:14:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:26.820 18:14:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:26.820 18:14:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:26.820 18:14:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:26.820 18:14:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70415 00:16:26.820 18:14:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70415 00:16:26.820 18:14:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:26.820 18:14:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70415 ']' 00:16:26.820 18:14:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.820 18:14:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.820 18:14:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.820 18:14:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.820 18:14:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.079 [2024-12-06 18:14:37.459836] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
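[Annotation] The xnvme_rpc test below creates an xnvme bdev over RPC and then verifies each of its parameters with a small jq helper. Reconstructed from the rpc_cmd/jq pairs traced at xnvme/common.sh@65-@66; a hedged sketch, not the verbatim source:

    # Pull the bdev subsystem config from the running target and extract
    # one parameter of the bdev_xnvme_create call.
    rpc_xnvme() {
        rpc_cmd framework_get_config bdev |
            jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
    }
    # Usage, as exercised in the trace that follows:
    #   [[ $(rpc_xnvme name) == xnvme_bdev ]]
    #   [[ $(rpc_xnvme filename) == /dev/nvme0n1 ]]
    #   [[ $(rpc_xnvme io_mechanism) == libaio ]]
    #   [[ $(rpc_xnvme conserve_cpu) == false ]]
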
00:16:27.079 [2024-12-06 18:14:37.459960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70415 ] 00:16:27.079 [2024-12-06 18:14:37.639406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.336 [2024-12-06 18:14:37.759043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.270 xnvme_bdev 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.270 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.528 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:28.528 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:28.528 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.528 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.528 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.528 18:14:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70415 00:16:28.528 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70415 ']' 00:16:28.528 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70415 00:16:28.528 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:28.528 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.528 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70415 00:16:28.528 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:28.528 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:28.528 killing process with pid 70415 00:16:28.528 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70415' 00:16:28.528 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70415 00:16:28.528 18:14:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70415 00:16:31.054 00:16:31.054 real 0m3.962s 00:16:31.054 user 0m3.994s 00:16:31.054 sys 0m0.505s 00:16:31.054 ************************************ 00:16:31.054 END TEST xnvme_rpc 00:16:31.054 ************************************ 00:16:31.054 18:14:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.054 18:14:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.054 18:14:41 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:31.054 18:14:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:31.054 18:14:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.054 18:14:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:31.054 ************************************ 00:16:31.054 START TEST xnvme_bdevperf 00:16:31.054 ************************************ 00:16:31.054 18:14:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:31.054 18:14:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:31.054 18:14:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:16:31.054 18:14:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:31.054 18:14:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:31.054 18:14:41 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:31.054 18:14:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:31.054 18:14:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:31.054 { 00:16:31.054 "subsystems": [ 00:16:31.054 { 00:16:31.054 "subsystem": "bdev", 00:16:31.054 "config": [ 00:16:31.054 { 00:16:31.054 "params": { 00:16:31.054 "io_mechanism": "libaio", 00:16:31.054 "conserve_cpu": false, 00:16:31.054 "filename": "/dev/nvme0n1", 00:16:31.054 "name": "xnvme_bdev" 00:16:31.054 }, 00:16:31.054 "method": "bdev_xnvme_create" 00:16:31.054 }, 00:16:31.054 { 00:16:31.054 "method": "bdev_wait_for_examine" 00:16:31.054 } 00:16:31.054 ] 00:16:31.054 } 00:16:31.054 ] 00:16:31.054 } 00:16:31.054 [2024-12-06 18:14:41.466964] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:16:31.054 [2024-12-06 18:14:41.467086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70495 ] 00:16:31.344 [2024-12-06 18:14:41.647568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.344 [2024-12-06 18:14:41.761096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.614 Running I/O for 5 seconds... 00:16:33.929 42894.00 IOPS, 167.55 MiB/s [2024-12-06T18:14:45.459Z] 42482.00 IOPS, 165.95 MiB/s [2024-12-06T18:14:46.396Z] 41951.33 IOPS, 163.87 MiB/s [2024-12-06T18:14:47.333Z] 42434.25 IOPS, 165.76 MiB/s 00:16:36.757 Latency(us) 00:16:36.757 [2024-12-06T18:14:47.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.757 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:36.757 xnvme_bdev : 5.00 42316.57 165.30 0.00 0.00 1509.30 156.27 6606.24 00:16:36.757 [2024-12-06T18:14:47.333Z] =================================================================================================================== 00:16:36.757 [2024-12-06T18:14:47.333Z] Total : 42316.57 165.30 0.00 0.00 1509.30 156.27 6606.24 00:16:38.133 18:14:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:38.133 18:14:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:38.133 18:14:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:38.133 18:14:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:38.133 18:14:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:38.133 { 00:16:38.133 "subsystems": [ 00:16:38.133 { 00:16:38.133 "subsystem": "bdev", 00:16:38.133 "config": [ 00:16:38.133 { 00:16:38.133 "params": { 00:16:38.133 "io_mechanism": "libaio", 00:16:38.133 "conserve_cpu": false, 00:16:38.133 "filename": "/dev/nvme0n1", 00:16:38.133 "name": "xnvme_bdev" 00:16:38.133 }, 00:16:38.133 "method": "bdev_xnvme_create" 00:16:38.133 }, 00:16:38.133 { 00:16:38.133 "method": "bdev_wait_for_examine" 00:16:38.133 } 00:16:38.133 ] 00:16:38.133 } 00:16:38.133 ] 00:16:38.133 } 00:16:38.133 [2024-12-06 18:14:48.484236] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
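Each bdevperf pass above is handed its bdev table as JSON on an inherited file descriptor (--json /dev/fd/62) rather than a config file. A sketch of an equivalent standalone invocation using bash process substitution, assuming the same repo paths and parameters as this run:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json <(printf '%s' '{"subsystems":[{"subsystem":"bdev","config":[
        {"params":{"io_mechanism":"libaio","conserve_cpu":false,
                   "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
         "method":"bdev_xnvme_create"},
        {"method":"bdev_wait_for_examine"}]}]}') \
      -q 64 -w randread -t 5 -T xnvme_bdev -o 4096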
00:16:38.133 [2024-12-06 18:14:48.484386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70581 ] 00:16:38.133 [2024-12-06 18:14:48.669704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.393 [2024-12-06 18:14:48.813599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.960 Running I/O for 5 seconds... 00:16:40.962 48187.00 IOPS, 188.23 MiB/s [2024-12-06T18:14:52.473Z] 41356.50 IOPS, 161.55 MiB/s [2024-12-06T18:14:53.404Z] 40177.00 IOPS, 156.94 MiB/s [2024-12-06T18:14:54.336Z] 40490.25 IOPS, 158.17 MiB/s [2024-12-06T18:14:54.336Z] 40232.60 IOPS, 157.16 MiB/s 00:16:43.760 Latency(us) 00:16:43.760 [2024-12-06T18:14:54.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.760 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:43.760 xnvme_bdev : 5.00 40215.32 157.09 0.00 0.00 1587.74 180.13 6132.49 00:16:43.760 [2024-12-06T18:14:54.336Z] =================================================================================================================== 00:16:43.760 [2024-12-06T18:14:54.336Z] Total : 40215.32 157.09 0.00 0.00 1587.74 180.13 6132.49 00:16:45.134 00:16:45.134 real 0m14.049s 00:16:45.134 user 0m5.317s 00:16:45.134 sys 0m6.133s 00:16:45.134 18:14:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.134 18:14:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:45.134 ************************************ 00:16:45.134 END TEST xnvme_bdevperf 00:16:45.134 ************************************ 00:16:45.134 18:14:55 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:45.134 18:14:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:45.134 18:14:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.134 18:14:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:45.134 ************************************ 00:16:45.134 START TEST xnvme_fio_plugin 00:16:45.134 ************************************ 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:45.134 
18:14:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:45.134 { 00:16:45.134 "subsystems": [ 00:16:45.134 { 00:16:45.134 "subsystem": "bdev", 00:16:45.134 "config": [ 00:16:45.134 { 00:16:45.134 "params": { 00:16:45.134 "io_mechanism": "libaio", 00:16:45.134 "conserve_cpu": false, 00:16:45.134 "filename": "/dev/nvme0n1", 00:16:45.134 "name": "xnvme_bdev" 00:16:45.134 }, 00:16:45.134 "method": "bdev_xnvme_create" 00:16:45.134 }, 00:16:45.134 { 00:16:45.134 "method": "bdev_wait_for_examine" 00:16:45.134 } 00:16:45.134 ] 00:16:45.134 } 00:16:45.134 ] 00:16:45.134 } 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:45.134 18:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:45.392 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:45.392 fio-3.35 00:16:45.392 Starting 1 thread 00:16:51.951 00:16:51.951 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70709: Fri Dec 6 18:15:01 2024 00:16:51.951 read: IOPS=43.5k, BW=170MiB/s (178MB/s)(849MiB/5001msec) 00:16:51.951 slat (usec): min=4, max=691, avg=20.17, stdev=22.87 00:16:51.951 clat (usec): min=89, max=7398, avg=865.66, stdev=551.43 00:16:51.951 lat (usec): min=138, max=7430, avg=885.83, stdev=555.75 00:16:51.951 clat percentiles (usec): 00:16:51.951 | 1.00th=[ 176], 5.00th=[ 253], 10.00th=[ 326], 20.00th=[ 449], 00:16:51.951 | 30.00th=[ 562], 40.00th=[ 676], 50.00th=[ 783], 60.00th=[ 889], 00:16:51.951 | 70.00th=[ 1012], 80.00th=[ 1156], 90.00th=[ 1385], 95.00th=[ 1762], 00:16:51.951 | 99.00th=[ 3195], 99.50th=[ 3752], 99.90th=[ 4621], 99.95th=[ 4883], 00:16:51.951 | 99.99th=[ 5604] 00:16:51.951 bw ( KiB/s): min=139184, max=187432, 
per=99.04%, avg=172158.22, stdev=15534.50, samples=9 00:16:51.951 iops : min=34796, max=46858, avg=43039.56, stdev=3883.62, samples=9 00:16:51.951 lat (usec) : 100=0.02%, 250=4.74%, 500=19.57%, 750=22.73%, 1000=22.14% 00:16:51.951 lat (msec) : 2=27.06%, 4=3.37%, 10=0.36% 00:16:51.951 cpu : usr=24.38%, sys=53.04%, ctx=241, majf=0, minf=764 00:16:51.951 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=11.0%, 16=26.1%, 32=55.9%, >=64=1.8% 00:16:51.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.951 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:16:51.951 issued rwts: total=217326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:51.951 00:16:51.951 Run status group 0 (all jobs): 00:16:51.951 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=849MiB (890MB), run=5001-5001msec 00:16:52.517 ----------------------------------------------------- 00:16:52.517 Suppressions used: 00:16:52.517 count bytes template 00:16:52.517 1 11 /usr/src/fio/parse.c 00:16:52.517 1 8 libtcmalloc_minimal.so 00:16:52.517 1 904 libcrypto.so 00:16:52.517 ----------------------------------------------------- 00:16:52.517 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:52.517 { 00:16:52.517 "subsystems": [ 00:16:52.517 { 00:16:52.517 "subsystem": "bdev", 00:16:52.517 "config": [ 00:16:52.517 { 
00:16:52.517 "params": { 00:16:52.517 "io_mechanism": "libaio", 00:16:52.517 "conserve_cpu": false, 00:16:52.517 "filename": "/dev/nvme0n1", 00:16:52.517 "name": "xnvme_bdev" 00:16:52.517 }, 00:16:52.517 "method": "bdev_xnvme_create" 00:16:52.517 }, 00:16:52.517 { 00:16:52.517 "method": "bdev_wait_for_examine" 00:16:52.517 } 00:16:52.517 ] 00:16:52.517 } 00:16:52.517 ] 00:16:52.517 } 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:52.517 18:15:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:52.776 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:52.776 fio-3.35 00:16:52.776 Starting 1 thread 00:16:59.391 00:16:59.391 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70806: Fri Dec 6 18:15:08 2024 00:16:59.391 write: IOPS=46.7k, BW=182MiB/s (191MB/s)(912MiB/5001msec); 0 zone resets 00:16:59.391 slat (usec): min=4, max=1184, avg=18.39, stdev=24.84 00:16:59.391 clat (usec): min=15, max=8334, avg=834.34, stdev=548.75 00:16:59.391 lat (usec): min=74, max=8350, avg=852.74, stdev=553.29 00:16:59.391 clat percentiles (usec): 00:16:59.391 | 1.00th=[ 186], 5.00th=[ 269], 10.00th=[ 338], 20.00th=[ 457], 00:16:59.391 | 30.00th=[ 553], 40.00th=[ 652], 50.00th=[ 742], 60.00th=[ 840], 00:16:59.391 | 70.00th=[ 938], 80.00th=[ 1074], 90.00th=[ 1303], 95.00th=[ 1745], 00:16:59.391 | 99.00th=[ 3261], 99.50th=[ 3851], 99.90th=[ 4686], 99.95th=[ 5014], 00:16:59.391 | 99.99th=[ 6587] 00:16:59.391 bw ( KiB/s): min=152424, max=202840, per=99.12%, avg=185121.33, stdev=14008.23, samples=9 00:16:59.391 iops : min=38106, max=50710, avg=46280.33, stdev=3502.06, samples=9 00:16:59.391 lat (usec) : 20=0.01%, 50=0.01%, 100=0.02%, 250=3.92%, 500=20.40% 00:16:59.391 lat (usec) : 750=26.56%, 1000=24.23% 00:16:59.391 lat (msec) : 2=21.17%, 4=3.31%, 10=0.40% 00:16:59.391 cpu : usr=28.20%, sys=51.06%, ctx=96, majf=0, minf=765 00:16:59.391 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=9.9%, 16=25.3%, 32=58.4%, >=64=1.9% 00:16:59.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.391 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:16:59.391 issued rwts: total=0,233501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.391 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:59.391 00:16:59.391 Run status group 0 (all jobs): 00:16:59.391 WRITE: bw=182MiB/s (191MB/s), 182MiB/s-182MiB/s (191MB/s-191MB/s), io=912MiB (956MB), run=5001-5001msec 00:16:59.957 ----------------------------------------------------- 00:16:59.957 Suppressions used: 00:16:59.957 count bytes template 00:16:59.957 1 11 /usr/src/fio/parse.c 00:16:59.957 1 8 libtcmalloc_minimal.so 00:16:59.957 1 904 libcrypto.so 00:16:59.957 ----------------------------------------------------- 00:16:59.957 00:16:59.957 00:16:59.957 real 0m14.880s 00:16:59.957 user 
0m6.377s 00:16:59.957 sys 0m5.977s 00:16:59.957 18:15:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.957 18:15:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:59.957 ************************************ 00:16:59.957 END TEST xnvme_fio_plugin 00:16:59.957 ************************************ 00:16:59.957 18:15:10 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:59.957 18:15:10 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:59.957 18:15:10 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:16:59.957 18:15:10 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:59.957 18:15:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:59.957 18:15:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.957 18:15:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:59.957 ************************************ 00:16:59.957 START TEST xnvme_rpc 00:16:59.957 ************************************ 00:16:59.957 18:15:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:59.957 18:15:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:59.957 18:15:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:59.957 18:15:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:59.957 18:15:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:59.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.957 18:15:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70888 00:16:59.957 18:15:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:59.957 18:15:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70888 00:16:59.957 18:15:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70888 ']' 00:16:59.957 18:15:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.957 18:15:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.957 18:15:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.957 18:15:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.957 18:15:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.216 [2024-12-06 18:15:10.566506] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
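This second xnvme_rpc pass differs from the first only in conserve_cpu: the cc map declared above translates the string "true" into the extra -c argument for bdev_xnvme_create. As a sketch:

    declare -A cc=( ["false"]="" ["true"]="-c" )
    # expands to: bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio ${cc["true"]}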
00:17:00.216 [2024-12-06 18:15:10.566623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70888 ] 00:17:00.216 [2024-12-06 18:15:10.749372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.474 [2024-12-06 18:15:10.859460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.418 xnvme_bdev 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70888 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70888 ']' 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70888 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70888 00:17:01.418 killing process with pid 70888 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70888' 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70888 00:17:01.418 18:15:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70888 00:17:03.951 00:17:03.951 real 0m3.883s 00:17:03.951 user 0m3.935s 00:17:03.951 sys 0m0.513s 00:17:03.951 18:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.951 ************************************ 00:17:03.951 END TEST xnvme_rpc 00:17:03.951 ************************************ 00:17:03.951 18:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.951 18:15:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:03.951 18:15:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:03.951 18:15:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.951 18:15:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:03.951 ************************************ 00:17:03.951 START TEST xnvme_bdevperf 00:17:03.951 ************************************ 00:17:03.951 18:15:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:03.951 18:15:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:03.951 18:15:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:17:03.951 18:15:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:03.951 18:15:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:03.951 18:15:14 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:03.951 18:15:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:03.951 18:15:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:03.951 { 00:17:03.951 "subsystems": [ 00:17:03.951 { 00:17:03.951 "subsystem": "bdev", 00:17:03.951 "config": [ 00:17:03.951 { 00:17:03.951 "params": { 00:17:03.951 "io_mechanism": "libaio", 00:17:03.951 "conserve_cpu": true, 00:17:03.951 "filename": "/dev/nvme0n1", 00:17:03.951 "name": "xnvme_bdev" 00:17:03.951 }, 00:17:03.951 "method": "bdev_xnvme_create" 00:17:03.951 }, 00:17:03.951 { 00:17:03.951 "method": "bdev_wait_for_examine" 00:17:03.951 } 00:17:03.951 ] 00:17:03.951 } 00:17:03.951 ] 00:17:03.951 } 00:17:03.951 [2024-12-06 18:15:14.516870] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:17:03.951 [2024-12-06 18:15:14.517145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70973 ] 00:17:04.211 [2024-12-06 18:15:14.699543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.470 [2024-12-06 18:15:14.811914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.729 Running I/O for 5 seconds... 00:17:06.598 40165.00 IOPS, 156.89 MiB/s [2024-12-06T18:15:18.548Z] 42670.50 IOPS, 166.68 MiB/s [2024-12-06T18:15:19.484Z] 42443.00 IOPS, 165.79 MiB/s [2024-12-06T18:15:20.419Z] 41527.25 IOPS, 162.22 MiB/s [2024-12-06T18:15:20.419Z] 41505.80 IOPS, 162.13 MiB/s 00:17:09.843 Latency(us) 00:17:09.843 [2024-12-06T18:15:20.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.843 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:09.843 xnvme_bdev : 5.00 41471.08 162.00 0.00 0.00 1538.97 235.23 5658.73 00:17:09.843 [2024-12-06T18:15:20.419Z] =================================================================================================================== 00:17:09.843 [2024-12-06T18:15:20.419Z] Total : 41471.08 162.00 0.00 0.00 1538.97 235.23 5658.73 00:17:10.779 18:15:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:10.779 18:15:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:10.779 18:15:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:10.779 18:15:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:10.779 18:15:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:11.039 { 00:17:11.039 "subsystems": [ 00:17:11.039 { 00:17:11.039 "subsystem": "bdev", 00:17:11.039 "config": [ 00:17:11.039 { 00:17:11.039 "params": { 00:17:11.039 "io_mechanism": "libaio", 00:17:11.039 "conserve_cpu": true, 00:17:11.039 "filename": "/dev/nvme0n1", 00:17:11.039 "name": "xnvme_bdev" 00:17:11.039 }, 00:17:11.039 "method": "bdev_xnvme_create" 00:17:11.039 }, 00:17:11.039 { 00:17:11.039 "method": "bdev_wait_for_examine" 00:17:11.039 } 00:17:11.039 ] 00:17:11.039 } 00:17:11.039 ] 00:17:11.039 } 00:17:11.039 [2024-12-06 18:15:21.407969] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:17:11.039 [2024-12-06 18:15:21.408088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71054 ] 00:17:11.039 [2024-12-06 18:15:21.580844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.297 [2024-12-06 18:15:21.695969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.556 Running I/O for 5 seconds... 00:17:13.499 42323.00 IOPS, 165.32 MiB/s [2024-12-06T18:15:25.459Z] 42348.00 IOPS, 165.42 MiB/s [2024-12-06T18:15:26.404Z] 41122.33 IOPS, 160.63 MiB/s [2024-12-06T18:15:27.336Z] 40973.75 IOPS, 160.05 MiB/s 00:17:16.760 Latency(us) 00:17:16.760 [2024-12-06T18:15:27.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.760 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:16.760 xnvme_bdev : 5.00 41144.44 160.72 0.00 0.00 1551.63 56.75 12528.17 00:17:16.760 [2024-12-06T18:15:27.336Z] =================================================================================================================== 00:17:16.760 [2024-12-06T18:15:27.336Z] Total : 41144.44 160.72 0.00 0.00 1551.63 56.75 12528.17 00:17:17.733 00:17:17.733 real 0m13.832s 00:17:17.733 user 0m5.307s 00:17:17.733 sys 0m5.760s 00:17:17.733 ************************************ 00:17:17.733 END TEST xnvme_bdevperf 00:17:17.733 ************************************ 00:17:17.733 18:15:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.733 18:15:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:17.733 18:15:28 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:17.733 18:15:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:17.733 18:15:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.733 18:15:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:17.990 ************************************ 00:17:17.990 START TEST xnvme_fio_plugin 00:17:17.990 ************************************ 00:17:17.990 18:15:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:17.990 18:15:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:17.990 18:15:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:17:17.990 18:15:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:17.991 18:15:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:17.991 { 00:17:17.991 "subsystems": [ 00:17:17.991 { 00:17:17.991 "subsystem": "bdev", 00:17:17.991 "config": [ 00:17:17.991 { 00:17:17.991 "params": { 00:17:17.991 "io_mechanism": "libaio", 00:17:17.991 "conserve_cpu": true, 00:17:17.991 "filename": "/dev/nvme0n1", 00:17:17.991 "name": "xnvme_bdev" 00:17:17.991 }, 00:17:17.991 "method": "bdev_xnvme_create" 00:17:17.991 }, 00:17:17.991 { 00:17:17.991 "method": "bdev_wait_for_examine" 00:17:17.991 } 00:17:17.991 ] 00:17:17.991 } 00:17:17.991 ] 00:17:17.991 } 00:17:17.991 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:17.991 fio-3.35 00:17:17.991 Starting 1 thread 00:17:24.552 00:17:24.552 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71179: Fri Dec 6 18:15:34 2024 00:17:24.552 read: IOPS=44.7k, BW=175MiB/s (183MB/s)(873MiB/5001msec) 00:17:24.552 slat (usec): min=4, max=972, avg=19.35, stdev=26.01 00:17:24.552 clat (usec): min=61, max=6535, avg=858.16, stdev=571.40 00:17:24.552 lat (usec): min=96, max=6636, avg=877.52, stdev=576.48 00:17:24.552 clat percentiles (usec): 00:17:24.552 | 1.00th=[ 182], 5.00th=[ 265], 10.00th=[ 338], 20.00th=[ 457], 00:17:24.552 | 30.00th=[ 570], 40.00th=[ 668], 50.00th=[ 766], 60.00th=[ 865], 00:17:24.552 | 70.00th=[ 963], 80.00th=[ 1090], 90.00th=[ 1352], 95.00th=[ 1778], 00:17:24.552 | 99.00th=[ 3392], 99.50th=[ 3982], 99.90th=[ 4752], 99.95th=[ 5080], 00:17:24.552 | 99.99th=[ 6325] 00:17:24.552 bw ( KiB/s): min=164432, max=200976, per=99.49%, avg=177865.78, stdev=11970.69, 
samples=9 00:17:24.552 iops : min=41108, max=50244, avg=44466.44, stdev=2992.67, samples=9 00:17:24.552 lat (usec) : 100=0.03%, 250=4.13%, 500=19.74%, 750=24.57%, 1000=24.55% 00:17:24.552 lat (msec) : 2=22.96%, 4=3.53%, 10=0.49% 00:17:24.552 cpu : usr=27.36%, sys=52.10%, ctx=111, majf=0, minf=764 00:17:24.552 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=10.3%, 16=25.3%, 32=57.7%, >=64=1.9% 00:17:24.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.552 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:17:24.552 issued rwts: total=223510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:24.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:24.553 00:17:24.553 Run status group 0 (all jobs): 00:17:24.553 READ: bw=175MiB/s (183MB/s), 175MiB/s-175MiB/s (183MB/s-183MB/s), io=873MiB (915MB), run=5001-5001msec 00:17:25.488 ----------------------------------------------------- 00:17:25.488 Suppressions used: 00:17:25.488 count bytes template 00:17:25.488 1 11 /usr/src/fio/parse.c 00:17:25.488 1 8 libtcmalloc_minimal.so 00:17:25.488 1 904 libcrypto.so 00:17:25.488 ----------------------------------------------------- 00:17:25.488 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:25.488 18:15:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:25.488 { 00:17:25.488 "subsystems": [ 00:17:25.488 { 00:17:25.488 "subsystem": "bdev", 00:17:25.488 "config": [ 00:17:25.488 { 00:17:25.488 "params": { 00:17:25.488 "io_mechanism": "libaio", 00:17:25.488 "conserve_cpu": true, 00:17:25.488 "filename": "/dev/nvme0n1", 00:17:25.488 "name": "xnvme_bdev" 00:17:25.488 }, 00:17:25.488 "method": "bdev_xnvme_create" 00:17:25.488 }, 00:17:25.488 { 00:17:25.488 "method": "bdev_wait_for_examine" 00:17:25.488 } 00:17:25.488 ] 00:17:25.488 } 00:17:25.488 ] 00:17:25.488 } 00:17:25.488 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:25.488 fio-3.35 00:17:25.488 Starting 1 thread 00:17:32.157 00:17:32.158 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71276: Fri Dec 6 18:15:41 2024 00:17:32.158 write: IOPS=41.8k, BW=163MiB/s (171MB/s)(816MiB/5001msec); 0 zone resets 00:17:32.158 slat (usec): min=4, max=1352, avg=20.03, stdev=27.54 00:17:32.158 clat (usec): min=22, max=29292, avg=942.15, stdev=771.38 00:17:32.158 lat (usec): min=60, max=29299, avg=962.18, stdev=773.61 00:17:32.158 clat percentiles (usec): 00:17:32.158 | 1.00th=[ 178], 5.00th=[ 269], 10.00th=[ 355], 20.00th=[ 498], 00:17:32.158 | 30.00th=[ 619], 40.00th=[ 725], 50.00th=[ 832], 60.00th=[ 947], 00:17:32.158 | 70.00th=[ 1074], 80.00th=[ 1237], 90.00th=[ 1532], 95.00th=[ 1909], 00:17:32.158 | 99.00th=[ 3425], 99.50th=[ 4047], 99.90th=[ 5276], 99.95th=[ 7570], 00:17:32.158 | 99.99th=[29230] 00:17:32.158 bw ( KiB/s): min=148120, max=190856, per=100.00%, avg=169369.89, stdev=14813.26, samples=9 00:17:32.158 iops : min=37030, max=47714, avg=42342.44, stdev=3703.31, samples=9 00:17:32.158 lat (usec) : 50=0.01%, 100=0.07%, 250=3.90%, 500=16.28%, 750=22.10% 00:17:32.158 lat (usec) : 1000=22.25% 00:17:32.158 lat (msec) : 2=31.00%, 4=3.86%, 10=0.50%, 50=0.03% 00:17:32.158 cpu : usr=29.62%, sys=48.82%, ctx=109, majf=0, minf=765 00:17:32.158 IO depths : 1=0.1%, 2=1.1%, 4=3.8%, 8=10.1%, 16=24.7%, 32=58.2%, >=64=2.0% 00:17:32.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.158 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:17:32.158 issued rwts: total=0,208945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.158 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:32.158 00:17:32.158 Run status group 0 (all jobs): 00:17:32.158 WRITE: bw=163MiB/s (171MB/s), 163MiB/s-163MiB/s (171MB/s-171MB/s), io=816MiB (856MB), run=5001-5001msec 00:17:32.723 ----------------------------------------------------- 00:17:32.723 Suppressions used: 00:17:32.723 count bytes template 00:17:32.723 1 11 /usr/src/fio/parse.c 00:17:32.723 1 8 libtcmalloc_minimal.so 00:17:32.723 1 904 libcrypto.so 00:17:32.723 ----------------------------------------------------- 00:17:32.723 00:17:32.723 00:17:32.723 real 0m14.842s 00:17:32.723 user 0m6.626s 00:17:32.723 sys 0m5.783s 
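Because the build is ASAN-instrumented, the fio plugin cannot simply be loaded via --ioengine alone; libasan has to be preloaded ahead of it, which is what the ldd/grep/LD_PRELOAD dance above arranges. A minimal sketch of the resulting invocation, with bdev.json standing in (hypothetically) for the /dev/fd/62 config used here:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randwrite --time_based --runtime=5 --thread=1 --name=xnvme_bdev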
00:17:32.723 18:15:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.723 18:15:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:32.723 ************************************ 00:17:32.723 END TEST xnvme_fio_plugin 00:17:32.723 ************************************ 00:17:32.723 18:15:43 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:17:32.723 18:15:43 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:17:32.723 18:15:43 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:17:32.723 18:15:43 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:17:32.723 18:15:43 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:17:32.723 18:15:43 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:32.723 18:15:43 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:17:32.723 18:15:43 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:17:32.723 18:15:43 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:32.723 18:15:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:32.723 18:15:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.723 18:15:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:32.723 ************************************ 00:17:32.723 START TEST xnvme_rpc 00:17:32.723 ************************************ 00:17:32.723 18:15:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:32.723 18:15:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:32.723 18:15:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:32.723 18:15:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:32.723 18:15:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:32.723 18:15:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71361 00:17:32.723 18:15:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:32.723 18:15:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71361 00:17:32.723 18:15:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71361 ']' 00:17:32.723 18:15:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.723 18:15:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.723 18:15:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.723 18:15:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.723 18:15:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.981 [2024-12-06 18:15:43.346546] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
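From here the whole rpc/bdevperf/fio_plugin cycle repeats for the io_uring mechanism; only the io_mechanism argument of the create call changes, while the block device stays /dev/nvme0n1. Sketch:

    # same round-trip as the libaio pass, io_uring mechanism, conserve_cpu off
    rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring ''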
00:17:32.981 [2024-12-06 18:15:43.346674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71361 ] 00:17:32.981 [2024-12-06 18:15:43.526966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.239 [2024-12-06 18:15:43.644585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.173 xnvme_bdev 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:17:34.173 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:34.174 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:34.174 18:15:44 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.174 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.174 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:34.174 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.174 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:17:34.174 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:34.174 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.174 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.174 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.174 18:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71361 00:17:34.174 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71361 ']' 00:17:34.174 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71361 00:17:34.174 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:34.174 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.174 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71361 00:17:34.431 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:34.431 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:34.431 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71361' 00:17:34.431 killing process with pid 71361 00:17:34.431 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71361 00:17:34.431 18:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71361 00:17:37.070 00:17:37.070 real 0m3.963s 00:17:37.070 user 0m4.055s 00:17:37.070 sys 0m0.541s 00:17:37.070 18:15:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.070 18:15:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.070 ************************************ 00:17:37.070 END TEST xnvme_rpc 00:17:37.070 ************************************ 00:17:37.070 18:15:47 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:37.070 18:15:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:37.070 18:15:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.070 18:15:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:37.070 ************************************ 00:17:37.070 START TEST xnvme_bdevperf 00:17:37.070 ************************************ 00:17:37.070 18:15:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:37.070 18:15:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:37.070 18:15:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:17:37.070 18:15:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:37.070 18:15:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:37.070 18:15:47 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:37.070 18:15:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:37.070 18:15:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:37.070 { 00:17:37.070 "subsystems": [ 00:17:37.070 { 00:17:37.070 "subsystem": "bdev", 00:17:37.070 "config": [ 00:17:37.070 { 00:17:37.070 "params": { 00:17:37.070 "io_mechanism": "io_uring", 00:17:37.070 "conserve_cpu": false, 00:17:37.070 "filename": "/dev/nvme0n1", 00:17:37.070 "name": "xnvme_bdev" 00:17:37.070 }, 00:17:37.070 "method": "bdev_xnvme_create" 00:17:37.070 }, 00:17:37.070 { 00:17:37.070 "method": "bdev_wait_for_examine" 00:17:37.070 } 00:17:37.070 ] 00:17:37.070 } 00:17:37.070 ] 00:17:37.070 } 00:17:37.070 [2024-12-06 18:15:47.351998] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:17:37.070 [2024-12-06 18:15:47.352121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71442 ] 00:17:37.070 [2024-12-06 18:15:47.536282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.328 [2024-12-06 18:15:47.652820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.587 Running I/O for 5 seconds... 00:17:39.472 40733.00 IOPS, 159.11 MiB/s [2024-12-06T18:15:51.445Z] 44131.00 IOPS, 172.39 MiB/s [2024-12-06T18:15:52.023Z] 44919.67 IOPS, 175.47 MiB/s [2024-12-06T18:15:53.400Z] 45082.50 IOPS, 176.10 MiB/s 00:17:42.824 Latency(us) 00:17:42.824 [2024-12-06T18:15:53.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.824 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:42.824 xnvme_bdev : 5.00 45360.56 177.19 0.00 0.00 1407.02 309.26 8264.38 00:17:42.824 [2024-12-06T18:15:53.400Z] =================================================================================================================== 00:17:42.824 [2024-12-06T18:15:53.400Z] Total : 45360.56 177.19 0.00 0.00 1407.02 309.26 8264.38 00:17:43.762 18:15:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:43.762 18:15:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:43.762 18:15:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:43.762 18:15:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:43.762 18:15:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:43.762 { 00:17:43.762 "subsystems": [ 00:17:43.762 { 00:17:43.762 "subsystem": "bdev", 00:17:43.762 "config": [ 00:17:43.762 { 00:17:43.762 "params": { 00:17:43.762 "io_mechanism": "io_uring", 00:17:43.762 "conserve_cpu": false, 00:17:43.762 "filename": "/dev/nvme0n1", 00:17:43.762 "name": "xnvme_bdev" 00:17:43.762 }, 00:17:43.762 "method": "bdev_xnvme_create" 00:17:43.762 }, 00:17:43.762 { 00:17:43.762 "method": "bdev_wait_for_examine" 00:17:43.762 } 00:17:43.762 ] 00:17:43.762 } 00:17:43.762 ] 00:17:43.762 } 00:17:43.762 [2024-12-06 18:15:54.278434] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
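Note: the --json /dev/fd/62 argument above is bash process substitution: gen_conf prints the JSON block shown and bdevperf reads it from an inherited file descriptor. An equivalent standalone invocation, sketched with the same JSON and flags as in the log:

  build/examples/bdevperf -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 \
    --json <(cat <<'EOF'
  {"subsystems": [{"subsystem": "bdev", "config": [
    {"params": {"io_mechanism": "io_uring", "conserve_cpu": false,
                "filename": "/dev/nvme0n1", "name": "xnvme_bdev"},
     "method": "bdev_xnvme_create"},
    {"method": "bdev_wait_for_examine"}]}]}
  EOF
  )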
00:17:43.762 [2024-12-06 18:15:54.278581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71523 ] 00:17:44.021 [2024-12-06 18:15:54.461476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.021 [2024-12-06 18:15:54.581771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.588 Running I/O for 5 seconds... 00:17:46.460 32001.00 IOPS, 125.00 MiB/s [2024-12-06T18:15:58.008Z] 28640.50 IOPS, 111.88 MiB/s [2024-12-06T18:15:59.383Z] 28843.00 IOPS, 112.67 MiB/s [2024-12-06T18:16:00.320Z] 29184.25 IOPS, 114.00 MiB/s [2024-12-06T18:16:00.320Z] 29465.80 IOPS, 115.10 MiB/s 00:17:49.744 Latency(us) 00:17:49.744 [2024-12-06T18:16:00.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.744 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:49.744 xnvme_bdev : 5.01 29428.97 114.96 0.00 0.00 2168.21 235.23 7001.03 00:17:49.744 [2024-12-06T18:16:00.320Z] =================================================================================================================== 00:17:49.744 [2024-12-06T18:16:00.320Z] Total : 29428.97 114.96 0.00 0.00 2168.21 235.23 7001.03 00:17:50.702 00:17:50.702 real 0m13.844s 00:17:50.702 user 0m6.505s 00:17:50.702 sys 0m7.140s 00:17:50.702 18:16:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:50.702 18:16:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:50.702 ************************************ 00:17:50.702 END TEST xnvme_bdevperf 00:17:50.702 ************************************ 00:17:50.702 18:16:01 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:50.702 18:16:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:50.702 18:16:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:50.702 18:16:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:50.702 ************************************ 00:17:50.702 START TEST xnvme_fio_plugin 00:17:50.702 ************************************ 00:17:50.702 18:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:50.702 18:16:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:50.703 18:16:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:50.703 { 00:17:50.703 "subsystems": [ 00:17:50.703 { 00:17:50.703 "subsystem": "bdev", 00:17:50.703 "config": [ 00:17:50.703 { 00:17:50.703 "params": { 00:17:50.703 "io_mechanism": "io_uring", 00:17:50.703 "conserve_cpu": false, 00:17:50.703 "filename": "/dev/nvme0n1", 00:17:50.703 "name": "xnvme_bdev" 00:17:50.703 }, 00:17:50.703 "method": "bdev_xnvme_create" 00:17:50.703 }, 00:17:50.703 { 00:17:50.703 "method": "bdev_wait_for_examine" 00:17:50.703 } 00:17:50.703 ] 00:17:50.703 } 00:17:50.703 ] 00:17:50.703 } 00:17:50.960 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:50.960 fio-3.35 00:17:50.960 Starting 1 thread 00:17:57.516 00:17:57.516 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71648: Fri Dec 6 18:16:07 2024 00:17:57.516 read: IOPS=30.0k, BW=117MiB/s (123MB/s)(586MiB/5001msec) 00:17:57.516 slat (usec): min=4, max=122, avg= 5.98, stdev= 1.94 00:17:57.516 clat (usec): min=1177, max=3795, avg=1899.42, stdev=222.63 00:17:57.516 lat (usec): min=1184, max=3828, avg=1905.40, stdev=222.97 00:17:57.516 clat percentiles (usec): 00:17:57.516 | 1.00th=[ 1467], 5.00th=[ 1598], 10.00th=[ 1647], 20.00th=[ 1713], 00:17:57.516 | 30.00th=[ 1778], 40.00th=[ 1827], 50.00th=[ 1876], 60.00th=[ 1926], 00:17:57.516 | 70.00th=[ 1991], 80.00th=[ 2057], 90.00th=[ 2180], 95.00th=[ 2278], 00:17:57.516 | 99.00th=[ 2606], 99.50th=[ 2704], 99.90th=[ 2933], 99.95th=[ 3032], 00:17:57.516 | 99.99th=[ 3654] 00:17:57.516 bw ( KiB/s): 
min=113152, max=129024, per=99.73%, avg=119667.00, stdev=4362.38, samples=9 00:17:57.516 iops : min=28288, max=32256, avg=29916.67, stdev=1090.54, samples=9 00:17:57.516 lat (msec) : 2=72.29%, 4=27.71% 00:17:57.516 cpu : usr=32.88%, sys=66.06%, ctx=14, majf=0, minf=762 00:17:57.516 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:57.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:57.516 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:57.516 issued rwts: total=150016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:57.516 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:57.516 00:17:57.516 Run status group 0 (all jobs): 00:17:57.516 READ: bw=117MiB/s (123MB/s), 117MiB/s-117MiB/s (123MB/s-123MB/s), io=586MiB (614MB), run=5001-5001msec 00:17:58.101 ----------------------------------------------------- 00:17:58.101 Suppressions used: 00:17:58.101 count bytes template 00:17:58.101 1 11 /usr/src/fio/parse.c 00:17:58.101 1 8 libtcmalloc_minimal.so 00:17:58.101 1 904 libcrypto.so 00:17:58.101 ----------------------------------------------------- 00:17:58.101 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:58.101 { 00:17:58.101 "subsystems": [ 00:17:58.101 { 00:17:58.101 "subsystem": "bdev", 00:17:58.101 "config": [ 00:17:58.101 { 00:17:58.101 "params": { 00:17:58.101 "io_mechanism": "io_uring", 00:17:58.101 
"conserve_cpu": false, 00:17:58.101 "filename": "/dev/nvme0n1", 00:17:58.101 "name": "xnvme_bdev" 00:17:58.101 }, 00:17:58.101 "method": "bdev_xnvme_create" 00:17:58.101 }, 00:17:58.101 { 00:17:58.101 "method": "bdev_wait_for_examine" 00:17:58.101 } 00:17:58.101 ] 00:17:58.101 } 00:17:58.101 ] 00:17:58.101 } 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:58.101 18:16:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:58.360 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:58.360 fio-3.35 00:17:58.360 Starting 1 thread 00:18:04.928 00:18:04.928 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71745: Fri Dec 6 18:16:14 2024 00:18:04.928 write: IOPS=28.5k, BW=111MiB/s (117MB/s)(557MiB/5002msec); 0 zone resets 00:18:04.928 slat (usec): min=3, max=100, avg= 6.27, stdev= 2.23 00:18:04.928 clat (usec): min=1326, max=5369, avg=1995.06, stdev=250.22 00:18:04.928 lat (usec): min=1331, max=5382, avg=2001.33, stdev=251.04 00:18:04.928 clat percentiles (usec): 00:18:04.928 | 1.00th=[ 1582], 5.00th=[ 1663], 10.00th=[ 1713], 20.00th=[ 1795], 00:18:04.928 | 30.00th=[ 1860], 40.00th=[ 1909], 50.00th=[ 1958], 60.00th=[ 2024], 00:18:04.928 | 70.00th=[ 2089], 80.00th=[ 2180], 90.00th=[ 2311], 95.00th=[ 2442], 00:18:04.928 | 99.00th=[ 2704], 99.50th=[ 2835], 99.90th=[ 3490], 99.95th=[ 3720], 00:18:04.928 | 99.99th=[ 5211] 00:18:04.928 bw ( KiB/s): min=101376, max=120320, per=99.69%, avg=113720.89, stdev=5680.92, samples=9 00:18:04.928 iops : min=25344, max=30080, avg=28430.22, stdev=1420.23, samples=9 00:18:04.928 lat (msec) : 2=56.98%, 4=42.97%, 10=0.04% 00:18:04.928 cpu : usr=31.59%, sys=67.31%, ctx=13, majf=0, minf=763 00:18:04.928 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:04.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.928 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:04.928 issued rwts: total=0,142656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.928 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:04.928 00:18:04.928 Run status group 0 (all jobs): 00:18:04.928 WRITE: bw=111MiB/s (117MB/s), 111MiB/s-111MiB/s (117MB/s-117MB/s), io=557MiB (584MB), run=5002-5002msec 00:18:05.534 ----------------------------------------------------- 00:18:05.534 Suppressions used: 00:18:05.534 count bytes template 00:18:05.534 1 11 /usr/src/fio/parse.c 00:18:05.534 1 8 libtcmalloc_minimal.so 00:18:05.534 1 904 libcrypto.so 00:18:05.534 ----------------------------------------------------- 00:18:05.534 00:18:05.534 00:18:05.534 real 0m14.694s 00:18:05.534 user 0m6.875s 00:18:05.534 sys 0m7.433s 00:18:05.534 18:16:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.534 ************************************ 
00:18:05.534 END TEST xnvme_fio_plugin 00:18:05.534 ************************************ 00:18:05.534 18:16:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:05.534 18:16:15 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:05.534 18:16:15 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:05.534 18:16:15 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:18:05.534 18:16:15 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:05.534 18:16:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:05.534 18:16:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.534 18:16:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:05.534 ************************************ 00:18:05.534 START TEST xnvme_rpc 00:18:05.534 ************************************ 00:18:05.534 18:16:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:05.534 18:16:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:05.534 18:16:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:05.534 18:16:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:05.534 18:16:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:05.534 18:16:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71831 00:18:05.534 18:16:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.534 18:16:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71831 00:18:05.534 18:16:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71831 ']' 00:18:05.534 18:16:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.534 18:16:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.534 18:16:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.534 18:16:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.534 18:16:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.534 [2024-12-06 18:16:16.033773] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
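Note: this second xnvme_rpc pass is the conserve_cpu=true leg of the loop: cc["true"]=-c above means the create call now carries -c, and the [[ true == true ]] check later confirms the flag round-trips through framework_get_config. Only the create line changes relative to the earlier pass, sketched here with the assumed scripts/rpc.py front end:

  scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
  scripts/rpc.py framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'  # -> true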
00:18:05.534 [2024-12-06 18:16:16.033919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71831 ] 00:18:05.793 [2024-12-06 18:16:16.212627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.793 [2024-12-06 18:16:16.323476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.727 xnvme_bdev 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:18:06.727 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71831 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71831 ']' 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71831 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71831 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:06.987 killing process with pid 71831 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71831' 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71831 00:18:06.987 18:16:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71831 00:18:09.558 00:18:09.558 real 0m3.835s 00:18:09.559 user 0m3.886s 00:18:09.559 sys 0m0.522s 00:18:09.559 18:16:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:09.559 18:16:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.559 ************************************ 00:18:09.559 END TEST xnvme_rpc 00:18:09.559 ************************************ 00:18:09.559 18:16:19 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:09.559 18:16:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:09.559 18:16:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:09.559 18:16:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:09.559 ************************************ 00:18:09.559 START TEST xnvme_bdevperf 00:18:09.559 ************************************ 00:18:09.559 18:16:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:09.559 18:16:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:09.559 18:16:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:18:09.559 18:16:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:09.559 18:16:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:09.559 18:16:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:18:09.559 18:16:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:09.559 18:16:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:09.559 { 00:18:09.559 "subsystems": [ 00:18:09.559 { 00:18:09.559 "subsystem": "bdev", 00:18:09.559 "config": [ 00:18:09.559 { 00:18:09.559 "params": { 00:18:09.559 "io_mechanism": "io_uring", 00:18:09.559 "conserve_cpu": true, 00:18:09.559 "filename": "/dev/nvme0n1", 00:18:09.559 "name": "xnvme_bdev" 00:18:09.559 }, 00:18:09.559 "method": "bdev_xnvme_create" 00:18:09.559 }, 00:18:09.559 { 00:18:09.559 "method": "bdev_wait_for_examine" 00:18:09.559 } 00:18:09.559 ] 00:18:09.559 } 00:18:09.559 ] 00:18:09.559 } 00:18:09.559 [2024-12-06 18:16:19.924414] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:18:09.559 [2024-12-06 18:16:19.924546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71911 ] 00:18:09.559 [2024-12-06 18:16:20.104803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.823 [2024-12-06 18:16:20.220697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.082 Running I/O for 5 seconds... 00:18:12.395 47168.00 IOPS, 184.25 MiB/s [2024-12-06T18:16:23.905Z] 45024.00 IOPS, 175.88 MiB/s [2024-12-06T18:16:24.841Z] 45781.33 IOPS, 178.83 MiB/s [2024-12-06T18:16:25.776Z] 44784.00 IOPS, 174.94 MiB/s 00:18:15.200 Latency(us) 00:18:15.200 [2024-12-06T18:16:25.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.200 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:15.200 xnvme_bdev : 5.00 44036.37 172.02 0.00 0.00 1449.45 750.11 4526.98 00:18:15.200 [2024-12-06T18:16:25.776Z] =================================================================================================================== 00:18:15.200 [2024-12-06T18:16:25.776Z] Total : 44036.37 172.02 0.00 0.00 1449.45 750.11 4526.98 00:18:16.133 18:16:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:16.133 18:16:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:16.133 18:16:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:16.133 18:16:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:16.133 18:16:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:16.391 { 00:18:16.391 "subsystems": [ 00:18:16.391 { 00:18:16.391 "subsystem": "bdev", 00:18:16.391 "config": [ 00:18:16.391 { 00:18:16.391 "params": { 00:18:16.391 "io_mechanism": "io_uring", 00:18:16.391 "conserve_cpu": true, 00:18:16.391 "filename": "/dev/nvme0n1", 00:18:16.391 "name": "xnvme_bdev" 00:18:16.391 }, 00:18:16.391 "method": "bdev_xnvme_create" 00:18:16.391 }, 00:18:16.391 { 00:18:16.391 "method": "bdev_wait_for_examine" 00:18:16.391 } 00:18:16.391 ] 00:18:16.391 } 00:18:16.391 ] 00:18:16.391 } 00:18:16.391 [2024-12-06 18:16:26.787417] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
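Note: the xnvme_fio_plugin legs before and after these bdevperf passes launch fio through the preload dance traced elsewhere in this log: find the ASAN runtime the spdk_bdev plugin links against, then put it ahead of the plugin in LD_PRELOAD so sanitizer interposition still works. A condensed sketch of that idiom (the JSON config reaches --spdk_json_conf the same way as bdevperf's --json above):

  plugin=build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
    --time_based --runtime=5 --thread=1 --name xnvme_bdev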
00:18:16.391 [2024-12-06 18:16:26.787558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71986 ] 00:18:16.391 [2024-12-06 18:16:26.966301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.650 [2024-12-06 18:16:27.082853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.909 Running I/O for 5 seconds... 00:18:19.221 35200.00 IOPS, 137.50 MiB/s [2024-12-06T18:16:30.734Z] 33728.00 IOPS, 131.75 MiB/s [2024-12-06T18:16:31.671Z] 34240.00 IOPS, 133.75 MiB/s [2024-12-06T18:16:32.606Z] 34336.00 IOPS, 134.12 MiB/s [2024-12-06T18:16:32.606Z] 34112.00 IOPS, 133.25 MiB/s 00:18:22.030 Latency(us) 00:18:22.030 [2024-12-06T18:16:32.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.030 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:22.031 xnvme_bdev : 5.00 34098.98 133.20 0.00 0.00 1871.61 980.41 7369.51 00:18:22.031 [2024-12-06T18:16:32.607Z] =================================================================================================================== 00:18:22.031 [2024-12-06T18:16:32.607Z] Total : 34098.98 133.20 0.00 0.00 1871.61 980.41 7369.51 00:18:22.967 00:18:22.967 real 0m13.707s 00:18:22.967 user 0m7.764s 00:18:22.967 sys 0m5.495s 00:18:22.967 ************************************ 00:18:22.967 END TEST xnvme_bdevperf 00:18:22.967 ************************************ 00:18:22.967 18:16:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.967 18:16:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:23.226 18:16:33 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:23.226 18:16:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:23.226 18:16:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:23.226 18:16:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:23.226 ************************************ 00:18:23.226 START TEST xnvme_fio_plugin 00:18:23.226 ************************************ 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:23.226 
18:16:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:23.226 18:16:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:23.226 { 00:18:23.226 "subsystems": [ 00:18:23.226 { 00:18:23.226 "subsystem": "bdev", 00:18:23.226 "config": [ 00:18:23.226 { 00:18:23.226 "params": { 00:18:23.226 "io_mechanism": "io_uring", 00:18:23.226 "conserve_cpu": true, 00:18:23.226 "filename": "/dev/nvme0n1", 00:18:23.226 "name": "xnvme_bdev" 00:18:23.226 }, 00:18:23.226 "method": "bdev_xnvme_create" 00:18:23.226 }, 00:18:23.226 { 00:18:23.226 "method": "bdev_wait_for_examine" 00:18:23.226 } 00:18:23.226 ] 00:18:23.226 } 00:18:23.226 ] 00:18:23.226 } 00:18:23.485 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:23.485 fio-3.35 00:18:23.485 Starting 1 thread 00:18:30.051 00:18:30.051 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72111: Fri Dec 6 18:16:39 2024 00:18:30.051 read: IOPS=33.0k, BW=129MiB/s (135MB/s)(644MiB/5001msec) 00:18:30.051 slat (nsec): min=3574, max=72076, avg=5126.85, stdev=2026.97 00:18:30.051 clat (usec): min=1226, max=4085, avg=1738.46, stdev=305.36 00:18:30.051 lat (usec): min=1230, max=4092, avg=1743.59, stdev=306.46 00:18:30.051 clat percentiles (usec): 00:18:30.051 | 1.00th=[ 1319], 5.00th=[ 1385], 10.00th=[ 1434], 20.00th=[ 1500], 00:18:30.051 | 30.00th=[ 1549], 40.00th=[ 1598], 50.00th=[ 1647], 60.00th=[ 1713], 00:18:30.051 | 70.00th=[ 1827], 80.00th=[ 1975], 90.00th=[ 2180], 95.00th=[ 2376], 00:18:30.051 | 99.00th=[ 2671], 99.50th=[ 2737], 99.90th=[ 2966], 99.95th=[ 3359], 00:18:30.051 | 99.99th=[ 4015] 00:18:30.051 bw ( KiB/s): 
min=110080, max=153600, per=100.00%, avg=133547.11, stdev=14962.06, samples=9 00:18:30.051 iops : min=27520, max=38400, avg=33386.78, stdev=3740.51, samples=9 00:18:30.051 lat (msec) : 2=81.50%, 4=18.49%, 10=0.01% 00:18:30.051 cpu : usr=47.98%, sys=48.44%, ctx=19, majf=0, minf=762 00:18:30.051 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:30.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.051 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:30.051 issued rwts: total=164800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.051 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:30.051 00:18:30.051 Run status group 0 (all jobs): 00:18:30.051 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=644MiB (675MB), run=5001-5001msec 00:18:30.620 ----------------------------------------------------- 00:18:30.620 Suppressions used: 00:18:30.620 count bytes template 00:18:30.620 1 11 /usr/src/fio/parse.c 00:18:30.620 1 8 libtcmalloc_minimal.so 00:18:30.620 1 904 libcrypto.so 00:18:30.621 ----------------------------------------------------- 00:18:30.621 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:30.621 { 00:18:30.621 "subsystems": [ 00:18:30.621 { 00:18:30.621 "subsystem": "bdev", 00:18:30.621 "config": [ 00:18:30.621 { 00:18:30.621 "params": { 00:18:30.621 "io_mechanism": "io_uring", 
00:18:30.621 "conserve_cpu": true, 00:18:30.621 "filename": "/dev/nvme0n1", 00:18:30.621 "name": "xnvme_bdev" 00:18:30.621 }, 00:18:30.621 "method": "bdev_xnvme_create" 00:18:30.621 }, 00:18:30.621 { 00:18:30.621 "method": "bdev_wait_for_examine" 00:18:30.621 } 00:18:30.621 ] 00:18:30.621 } 00:18:30.621 ] 00:18:30.621 } 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:30.621 18:16:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:30.621 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:30.621 fio-3.35 00:18:30.621 Starting 1 thread 00:18:37.186 00:18:37.186 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72207: Fri Dec 6 18:16:46 2024 00:18:37.186 write: IOPS=35.9k, BW=140MiB/s (147MB/s)(702MiB/5001msec); 0 zone resets 00:18:37.186 slat (usec): min=2, max=612, avg= 4.82, stdev= 2.96 00:18:37.186 clat (usec): min=738, max=3482, avg=1590.56, stdev=300.63 00:18:37.186 lat (usec): min=742, max=3496, avg=1595.38, stdev=301.66 00:18:37.186 clat percentiles (usec): 00:18:37.186 | 1.00th=[ 979], 5.00th=[ 1123], 10.00th=[ 1237], 20.00th=[ 1369], 00:18:37.186 | 30.00th=[ 1450], 40.00th=[ 1500], 50.00th=[ 1565], 60.00th=[ 1631], 00:18:37.186 | 70.00th=[ 1696], 80.00th=[ 1795], 90.00th=[ 1991], 95.00th=[ 2147], 00:18:37.186 | 99.00th=[ 2474], 99.50th=[ 2573], 99.90th=[ 2769], 99.95th=[ 2868], 00:18:37.186 | 99.99th=[ 3261] 00:18:37.186 bw ( KiB/s): min=124928, max=159744, per=100.00%, avg=145463.00, stdev=12170.94, samples=9 00:18:37.186 iops : min=31232, max=39936, avg=36365.67, stdev=3042.84, samples=9 00:18:37.186 lat (usec) : 750=0.01%, 1000=1.47% 00:18:37.186 lat (msec) : 2=89.11%, 4=9.42% 00:18:37.186 cpu : usr=47.62%, sys=48.98%, ctx=18, majf=0, minf=763 00:18:37.186 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:18:37.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.186 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:18:37.186 issued rwts: total=0,179648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.186 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:37.186 00:18:37.186 Run status group 0 (all jobs): 00:18:37.186 WRITE: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=702MiB (736MB), run=5001-5001msec 00:18:37.786 ----------------------------------------------------- 00:18:37.786 Suppressions used: 00:18:37.786 count bytes template 00:18:37.786 1 11 /usr/src/fio/parse.c 00:18:37.786 1 8 libtcmalloc_minimal.so 00:18:37.787 1 904 libcrypto.so 00:18:37.787 ----------------------------------------------------- 00:18:37.787 00:18:37.787 00:18:37.787 real 0m14.709s 00:18:37.787 user 0m8.490s 00:18:37.787 sys 0m5.617s 00:18:37.787 18:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:18:37.787 18:16:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:37.787 ************************************ 00:18:37.787 END TEST xnvme_fio_plugin 00:18:37.787 ************************************ 00:18:38.046 18:16:48 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:18:38.046 18:16:48 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:18:38.046 18:16:48 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:18:38.046 18:16:48 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:18:38.046 18:16:48 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:18:38.046 18:16:48 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:38.046 18:16:48 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:18:38.046 18:16:48 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:18:38.047 18:16:48 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:38.047 18:16:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:38.047 18:16:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.047 18:16:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:38.047 ************************************ 00:18:38.047 START TEST xnvme_rpc 00:18:38.047 ************************************ 00:18:38.047 18:16:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:38.047 18:16:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:38.047 18:16:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:38.047 18:16:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:38.047 18:16:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:38.047 18:16:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72296 00:18:38.047 18:16:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:38.047 18:16:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72296 00:18:38.047 18:16:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72296 ']' 00:18:38.047 18:16:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.047 18:16:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.047 18:16:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.047 18:16:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.047 18:16:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:38.047 [2024-12-06 18:16:48.509503] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
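Note: the outer loop now switches io_mechanism to io_uring_cmd and the filename from the block device /dev/nvme0n1 to the NVMe generic char device /dev/ng0n1, which is what the io_uring pass-through path drives; the rpc, bdevperf, and fio legs then repeat unchanged. Only the create call differs, sketched with the assumed scripts/rpc.py front end:

  scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd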
00:18:38.047 [2024-12-06 18:16:48.510095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72296 ] 00:18:38.307 [2024-12-06 18:16:48.690583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.307 [2024-12-06 18:16:48.802911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:39.245 xnvme_bdev 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.245 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:39.504 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.504 18:16:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:18:39.504 18:16:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:39.504 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.504 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:39.504 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.504 18:16:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72296 00:18:39.504 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72296 ']' 00:18:39.504 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72296 00:18:39.504 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:39.504 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.504 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72296 00:18:39.504 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.504 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.504 killing process with pid 72296 00:18:39.504 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72296' 00:18:39.504 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72296 00:18:39.504 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72296 00:18:42.034 00:18:42.034 real 0m3.887s 00:18:42.034 user 0m3.920s 00:18:42.034 sys 0m0.560s 00:18:42.034 18:16:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.034 18:16:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.034 ************************************ 00:18:42.034 END TEST xnvme_rpc 00:18:42.034 ************************************ 00:18:42.034 18:16:52 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:42.034 18:16:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:42.034 18:16:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.034 18:16:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:42.034 ************************************ 00:18:42.034 START TEST xnvme_bdevperf 00:18:42.034 ************************************ 00:18:42.034 18:16:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:42.035 18:16:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:42.035 18:16:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:18:42.035 18:16:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:42.035 18:16:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:42.035 18:16:52 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:42.035 18:16:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:42.035 18:16:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:42.035 { 00:18:42.035 "subsystems": [ 00:18:42.035 { 00:18:42.035 "subsystem": "bdev", 00:18:42.035 "config": [ 00:18:42.035 { 00:18:42.035 "params": { 00:18:42.035 "io_mechanism": "io_uring_cmd", 00:18:42.035 "conserve_cpu": false, 00:18:42.035 "filename": "/dev/ng0n1", 00:18:42.035 "name": "xnvme_bdev" 00:18:42.035 }, 00:18:42.035 "method": "bdev_xnvme_create" 00:18:42.035 }, 00:18:42.035 { 00:18:42.035 "method": "bdev_wait_for_examine" 00:18:42.035 } 00:18:42.035 ] 00:18:42.035 } 00:18:42.035 ] 00:18:42.035 } 00:18:42.035 [2024-12-06 18:16:52.452771] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:18:42.035 [2024-12-06 18:16:52.452908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72378 ] 00:18:42.293 [2024-12-06 18:16:52.633556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.293 [2024-12-06 18:16:52.750727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.551 Running I/O for 5 seconds... 00:18:44.865 31227.00 IOPS, 121.98 MiB/s [2024-12-06T18:16:56.393Z] 31963.50 IOPS, 124.86 MiB/s [2024-12-06T18:16:57.330Z] 31046.67 IOPS, 121.28 MiB/s [2024-12-06T18:16:58.268Z] 30259.00 IOPS, 118.20 MiB/s [2024-12-06T18:16:58.268Z] 30248.60 IOPS, 118.16 MiB/s 00:18:47.692 Latency(us) 00:18:47.692 [2024-12-06T18:16:58.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.692 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:47.692 xnvme_bdev : 5.00 30231.47 118.09 0.00 0.00 2110.73 690.89 23687.71 00:18:47.692 [2024-12-06T18:16:58.268Z] =================================================================================================================== 00:18:47.692 [2024-12-06T18:16:58.268Z] Total : 30231.47 118.09 0.00 0.00 2110.73 690.89 23687.71 00:18:49.071 18:16:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:49.071 18:16:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:49.071 18:16:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:49.071 18:16:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:49.071 18:16:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:49.071 { 00:18:49.071 "subsystems": [ 00:18:49.071 { 00:18:49.071 "subsystem": "bdev", 00:18:49.071 "config": [ 00:18:49.071 { 00:18:49.071 "params": { 00:18:49.071 "io_mechanism": "io_uring_cmd", 00:18:49.071 "conserve_cpu": false, 00:18:49.071 "filename": "/dev/ng0n1", 00:18:49.071 "name": "xnvme_bdev" 00:18:49.071 }, 00:18:49.071 "method": "bdev_xnvme_create" 00:18:49.071 }, 00:18:49.071 { 00:18:49.071 "method": "bdev_wait_for_examine" 00:18:49.071 } 00:18:49.071 ] 00:18:49.071 } 00:18:49.071 ] 00:18:49.071 } 00:18:49.071 [2024-12-06 18:16:59.296976] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
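The JSON fragment logged just above is the exact bdev configuration that bdevperf reads from /dev/fd/62. A rough standalone equivalent, with an ordinary temp file standing in for the fd (the file path and root invocation are assumptions, not taken from this log):

    # Sketch only: re-run the randread pass by hand against the same char device.
    cat > /tmp/xnvme_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_xnvme_create",
              "params": {
                "io_mechanism": "io_uring_cmd",
                "conserve_cpu": false,
                "filename": "/dev/ng0n1",
                "name": "xnvme_bdev"
              }
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    # Same flags as the harness: queue depth 64, 4 KiB I/Os, 5 seconds, target bdev xnvme_bdev.
    sudo ./build/examples/bdevperf --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096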
00:18:49.072 [2024-12-06 18:16:59.297109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72459 ] 00:18:49.072 [2024-12-06 18:16:59.476730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.072 [2024-12-06 18:16:59.592661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.640 Running I/O for 5 seconds... 00:18:51.512 28352.00 IOPS, 110.75 MiB/s [2024-12-06T18:17:03.039Z] 27557.00 IOPS, 107.64 MiB/s [2024-12-06T18:17:03.975Z] 27651.33 IOPS, 108.01 MiB/s [2024-12-06T18:17:05.365Z] 27458.50 IOPS, 107.26 MiB/s 00:18:54.789 Latency(us) 00:18:54.789 [2024-12-06T18:17:05.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.789 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:54.789 xnvme_bdev : 5.00 28299.23 110.54 0.00 0.00 2254.60 980.41 8001.18 00:18:54.789 [2024-12-06T18:17:05.365Z] =================================================================================================================== 00:18:54.789 [2024-12-06T18:17:05.365Z] Total : 28299.23 110.54 0.00 0.00 2254.60 980.41 8001.18 00:18:55.725 18:17:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:55.725 18:17:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:18:55.725 18:17:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:55.725 18:17:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:55.725 18:17:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:55.725 { 00:18:55.725 "subsystems": [ 00:18:55.725 { 00:18:55.725 "subsystem": "bdev", 00:18:55.725 "config": [ 00:18:55.725 { 00:18:55.725 "params": { 00:18:55.725 "io_mechanism": "io_uring_cmd", 00:18:55.725 "conserve_cpu": false, 00:18:55.725 "filename": "/dev/ng0n1", 00:18:55.725 "name": "xnvme_bdev" 00:18:55.725 }, 00:18:55.725 "method": "bdev_xnvme_create" 00:18:55.725 }, 00:18:55.725 { 00:18:55.725 "method": "bdev_wait_for_examine" 00:18:55.725 } 00:18:55.725 ] 00:18:55.725 } 00:18:55.725 ] 00:18:55.725 } 00:18:55.725 [2024-12-06 18:17:06.121701] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:18:55.725 [2024-12-06 18:17:06.121834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72541 ] 00:18:55.985 [2024-12-06 18:17:06.302786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.985 [2024-12-06 18:17:06.414932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.244 Running I/O for 5 seconds... 
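The MiB/s column in these result tables is just IOPS scaled by the 4096-byte I/O size from -o 4096; the randwrite total above checks out:

    # MiB/s = IOPS * block_size / 2^20
    awk 'BEGIN { printf "%.2f MiB/s\n", 28299.23 * 4096 / 1048576 }'   # prints 110.54, matching the table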
00:18:58.187 71488.00 IOPS, 279.25 MiB/s [2024-12-06T18:17:10.138Z] 71328.00 IOPS, 278.62 MiB/s [2024-12-06T18:17:11.075Z] 71381.33 IOPS, 278.83 MiB/s [2024-12-06T18:17:12.012Z] 71312.00 IOPS, 278.56 MiB/s [2024-12-06T18:17:12.012Z] 71398.40 IOPS, 278.90 MiB/s 00:19:01.436 Latency(us) 00:19:01.436 [2024-12-06T18:17:12.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.436 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:19:01.436 xnvme_bdev : 5.00 71385.33 278.85 0.00 0.00 893.82 661.28 3790.03 00:19:01.436 [2024-12-06T18:17:12.012Z] =================================================================================================================== 00:19:01.436 [2024-12-06T18:17:12.012Z] Total : 71385.33 278.85 0.00 0.00 893.82 661.28 3790.03 00:19:02.373 18:17:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:02.374 18:17:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:19:02.374 18:17:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:02.374 18:17:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:02.374 18:17:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:02.374 { 00:19:02.374 "subsystems": [ 00:19:02.374 { 00:19:02.374 "subsystem": "bdev", 00:19:02.374 "config": [ 00:19:02.374 { 00:19:02.374 "params": { 00:19:02.374 "io_mechanism": "io_uring_cmd", 00:19:02.374 "conserve_cpu": false, 00:19:02.374 "filename": "/dev/ng0n1", 00:19:02.374 "name": "xnvme_bdev" 00:19:02.374 }, 00:19:02.374 "method": "bdev_xnvme_create" 00:19:02.374 }, 00:19:02.374 { 00:19:02.374 "method": "bdev_wait_for_examine" 00:19:02.374 } 00:19:02.374 ] 00:19:02.374 } 00:19:02.374 ] 00:19:02.374 } 00:19:02.374 [2024-12-06 18:17:12.945232] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:19:02.374 [2024-12-06 18:17:12.945381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72619 ] 00:19:02.632 [2024-12-06 18:17:13.126035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.891 [2024-12-06 18:17:13.239552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.151 Running I/O for 5 seconds... 
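A quick sanity check on the unmap totals above: at a fixed queue depth, Little's law gives in-flight I/Os = IOPS x mean latency, and this run recovers the -q 64 setting almost exactly:

    # Little's law: concurrency = throughput * mean latency (latency converted from us to s)
    awk 'BEGIN { printf "%.1f\n", 71385.33 * 893.82 / 1e6 }'   # ~63.8, i.e. the queue of 64 stays essentially full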
00:19:05.025 39196.00 IOPS, 153.11 MiB/s [2024-12-06T18:17:16.976Z] 45518.50 IOPS, 177.81 MiB/s [2024-12-06T18:17:17.912Z] 49373.00 IOPS, 192.86 MiB/s [2024-12-06T18:17:18.847Z] 47272.50 IOPS, 184.66 MiB/s [2024-12-06T18:17:18.847Z] 46906.60 IOPS, 183.23 MiB/s 00:19:08.271 Latency(us) 00:19:08.271 [2024-12-06T18:17:18.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.271 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:19:08.271 xnvme_bdev : 5.00 46884.91 183.14 0.00 0.00 1361.04 150.52 10580.51 00:19:08.271 [2024-12-06T18:17:18.847Z] =================================================================================================================== 00:19:08.271 [2024-12-06T18:17:18.847Z] Total : 46884.91 183.14 0.00 0.00 1361.04 150.52 10580.51 00:19:09.210 00:19:09.210 real 0m27.315s 00:19:09.210 user 0m13.749s 00:19:09.210 sys 0m13.156s 00:19:09.210 18:17:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.210 18:17:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:09.210 ************************************ 00:19:09.210 END TEST xnvme_bdevperf 00:19:09.210 ************************************ 00:19:09.210 18:17:19 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:09.210 18:17:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:09.210 18:17:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.210 18:17:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:09.210 ************************************ 00:19:09.210 START TEST xnvme_fio_plugin 00:19:09.210 ************************************ 00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
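The xtrace around this point is the fio wrapper locating the ASan runtime so it can be preloaded ahead of the SPDK fio plugin. Stripped of harness plumbing, the shim amounts to something like this (a sketch; only the two paths are taken from the trace):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Find which libasan the plugin was linked against.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # ASan has to come first in LD_PRELOAD so its interceptors wrap everything the plugin pulls in.
    [[ -n "$asan_lib" ]] && export LD_PRELOAD="$asan_lib $plugin"
    # fio is then launched with --ioengine=spdk_bdev, as logged below.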
00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:09.210 18:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:09.469 18:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:09.469 18:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:09.469 18:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:09.469 18:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:09.469 18:17:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:09.469 { 00:19:09.469 "subsystems": [ 00:19:09.469 { 00:19:09.469 "subsystem": "bdev", 00:19:09.469 "config": [ 00:19:09.469 { 00:19:09.469 "params": { 00:19:09.469 "io_mechanism": "io_uring_cmd", 00:19:09.469 "conserve_cpu": false, 00:19:09.469 "filename": "/dev/ng0n1", 00:19:09.469 "name": "xnvme_bdev" 00:19:09.469 }, 00:19:09.469 "method": "bdev_xnvme_create" 00:19:09.469 }, 00:19:09.469 { 00:19:09.469 "method": "bdev_wait_for_examine" 00:19:09.469 } 00:19:09.469 ] 00:19:09.469 } 00:19:09.469 ] 00:19:09.469 } 00:19:09.469 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:09.469 fio-3.35 00:19:09.469 Starting 1 thread 00:19:16.036 00:19:16.036 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72743: Fri Dec 6 18:17:25 2024 00:19:16.036 read: IOPS=27.5k, BW=107MiB/s (112MB/s)(537MiB/5002msec) 00:19:16.036 slat (nsec): min=4810, max=69337, avg=6646.35, stdev=2268.90 00:19:16.036 clat (usec): min=1512, max=5599, avg=2067.68, stdev=223.35 00:19:16.036 lat (usec): min=1518, max=5623, avg=2074.32, stdev=223.87 00:19:16.036 clat percentiles (usec): 00:19:16.036 | 1.00th=[ 1663], 5.00th=[ 1762], 10.00th=[ 1811], 20.00th=[ 1893], 00:19:16.036 | 30.00th=[ 1942], 40.00th=[ 2008], 50.00th=[ 2057], 60.00th=[ 2114], 00:19:16.036 | 70.00th=[ 2147], 80.00th=[ 2245], 90.00th=[ 2343], 95.00th=[ 2409], 00:19:16.036 | 99.00th=[ 2638], 99.50th=[ 2704], 99.90th=[ 3523], 99.95th=[ 4555], 00:19:16.036 | 99.99th=[ 5538] 00:19:16.036 bw ( KiB/s): min=104448, max=113948, per=99.58%, avg=109372.00, stdev=3907.10, samples=9 00:19:16.036 iops : min=26112, max=28487, avg=27343.00, stdev=976.77, samples=9 00:19:16.036 lat (msec) : 2=39.99%, 4=59.92%, 10=0.09% 00:19:16.036 cpu : usr=35.05%, sys=63.79%, ctx=7, majf=0, minf=762 00:19:16.036 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:16.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.036 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:16.036 
issued rwts: total=137344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.036 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:16.036 00:19:16.036 Run status group 0 (all jobs): 00:19:16.036 READ: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=537MiB (563MB), run=5002-5002msec 00:19:16.603 ----------------------------------------------------- 00:19:16.603 Suppressions used: 00:19:16.603 count bytes template 00:19:16.603 1 11 /usr/src/fio/parse.c 00:19:16.603 1 8 libtcmalloc_minimal.so 00:19:16.603 1 904 libcrypto.so 00:19:16.603 ----------------------------------------------------- 00:19:16.603 00:19:16.603 18:17:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:16.603 18:17:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:16.603 18:17:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:16.603 18:17:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:16.603 18:17:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:16.603 18:17:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:16.603 18:17:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:16.603 18:17:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:16.603 18:17:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:16.603 18:17:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:16.603 18:17:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:16.603 18:17:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:16.603 18:17:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:16.603 18:17:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:16.603 18:17:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:16.603 18:17:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:16.861 18:17:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:16.861 18:17:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:16.861 18:17:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:16.861 18:17:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:16.861 18:17:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite 
--time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:16.861 { 00:19:16.861 "subsystems": [ 00:19:16.861 { 00:19:16.862 "subsystem": "bdev", 00:19:16.862 "config": [ 00:19:16.862 { 00:19:16.862 "params": { 00:19:16.862 "io_mechanism": "io_uring_cmd", 00:19:16.862 "conserve_cpu": false, 00:19:16.862 "filename": "/dev/ng0n1", 00:19:16.862 "name": "xnvme_bdev" 00:19:16.862 }, 00:19:16.862 "method": "bdev_xnvme_create" 00:19:16.862 }, 00:19:16.862 { 00:19:16.862 "method": "bdev_wait_for_examine" 00:19:16.862 } 00:19:16.862 ] 00:19:16.862 } 00:19:16.862 ] 00:19:16.862 } 00:19:16.862 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:16.862 fio-3.35 00:19:16.862 Starting 1 thread 00:19:23.503 00:19:23.503 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72835: Fri Dec 6 18:17:33 2024 00:19:23.503 write: IOPS=29.9k, BW=117MiB/s (122MB/s)(584MiB/5001msec); 0 zone resets 00:19:23.503 slat (usec): min=4, max=969, avg= 6.12, stdev= 3.53 00:19:23.503 clat (usec): min=267, max=4879, avg=1901.63, stdev=234.61 00:19:23.503 lat (usec): min=272, max=4887, avg=1907.75, stdev=235.28 00:19:23.503 clat percentiles (usec): 00:19:23.503 | 1.00th=[ 1500], 5.00th=[ 1598], 10.00th=[ 1647], 20.00th=[ 1713], 00:19:23.503 | 30.00th=[ 1762], 40.00th=[ 1827], 50.00th=[ 1876], 60.00th=[ 1926], 00:19:23.503 | 70.00th=[ 1991], 80.00th=[ 2057], 90.00th=[ 2212], 95.00th=[ 2343], 00:19:23.503 | 99.00th=[ 2573], 99.50th=[ 2671], 99.90th=[ 3064], 99.95th=[ 3228], 00:19:23.503 | 99.99th=[ 4752] 00:19:23.503 bw ( KiB/s): min=113152, max=126976, per=100.00%, avg=120263.11, stdev=5305.10, samples=9 00:19:23.503 iops : min=28288, max=31744, avg=30065.78, stdev=1326.27, samples=9 00:19:23.503 lat (usec) : 500=0.01% 00:19:23.503 lat (msec) : 2=72.48%, 4=27.47%, 10=0.04% 00:19:23.503 cpu : usr=32.82%, sys=65.64%, ctx=25, majf=0, minf=763 00:19:23.503 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:23.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.503 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:23.503 issued rwts: total=0,149382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.503 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:23.503 00:19:23.503 Run status group 0 (all jobs): 00:19:23.503 WRITE: bw=117MiB/s (122MB/s), 117MiB/s-117MiB/s (122MB/s-122MB/s), io=584MiB (612MB), run=5001-5001msec 00:19:24.073 ----------------------------------------------------- 00:19:24.073 Suppressions used: 00:19:24.073 count bytes template 00:19:24.073 1 11 /usr/src/fio/parse.c 00:19:24.073 1 8 libtcmalloc_minimal.so 00:19:24.073 1 904 libcrypto.so 00:19:24.073 ----------------------------------------------------- 00:19:24.073 00:19:24.073 00:19:24.073 real 0m14.799s 00:19:24.073 user 0m7.152s 00:19:24.073 sys 0m7.257s 00:19:24.073 18:17:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.073 18:17:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:24.073 ************************************ 00:19:24.073 END TEST xnvme_fio_plugin 00:19:24.073 ************************************ 00:19:24.073 18:17:34 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:24.073 18:17:34 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:19:24.073 18:17:34 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:19:24.073 18:17:34 nvme_xnvme -- 
xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:24.073 18:17:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:24.073 18:17:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.073 18:17:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:24.073 ************************************ 00:19:24.073 START TEST xnvme_rpc 00:19:24.073 ************************************ 00:19:24.073 18:17:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:24.073 18:17:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:24.073 18:17:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:24.073 18:17:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:24.073 18:17:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:24.073 18:17:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72920 00:19:24.073 18:17:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:24.073 18:17:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72920 00:19:24.073 18:17:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72920 ']' 00:19:24.073 18:17:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.073 18:17:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.073 18:17:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.073 18:17:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.073 18:17:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:24.332 [2024-12-06 18:17:34.733112] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
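This second xnvme_rpc pass repeats the create/inspect/delete cycle against pid 72920 with conserve_cpu enabled (the trailing -c). Outside the harness, the same round-trip can be approximated with rpc.py once spdk_tgt is listening; the script path is an assumption, while the jq filter is the one the trace itself uses:

    ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev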
00:19:24.332 [2024-12-06 18:17:34.733471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72920 ] 00:19:24.590 [2024-12-06 18:17:34.909243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.590 [2024-12-06 18:17:35.025726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:25.524 xnvme_bdev 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.524 18:17:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:19:25.524 18:17:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:25.524 18:17:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:25.524 18:17:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:25.524 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.524 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:25.524 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.524 18:17:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:19:25.524 18:17:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:25.524 18:17:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:25.524 18:17:36 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:25.524 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.524 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:25.524 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.524 18:17:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:19:25.524 18:17:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:25.524 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.524 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:25.783 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.783 18:17:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72920 00:19:25.783 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72920 ']' 00:19:25.783 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72920 00:19:25.783 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:25.783 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.783 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72920 00:19:25.783 killing process with pid 72920 00:19:25.783 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.783 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.783 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72920' 00:19:25.783 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72920 00:19:25.783 18:17:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72920 00:19:28.317 00:19:28.317 real 0m3.917s 00:19:28.317 user 0m3.983s 00:19:28.317 sys 0m0.531s 00:19:28.317 18:17:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.317 ************************************ 00:19:28.317 END TEST xnvme_rpc 00:19:28.317 ************************************ 00:19:28.317 18:17:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:28.317 18:17:38 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:28.317 18:17:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:28.317 18:17:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.317 18:17:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:28.317 ************************************ 00:19:28.317 START TEST xnvme_bdevperf 00:19:28.317 ************************************ 00:19:28.317 18:17:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:28.317 18:17:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:28.317 18:17:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:19:28.317 18:17:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:28.317 18:17:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:28.317 18:17:38 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:28.317 18:17:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:28.317 18:17:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:28.317 { 00:19:28.317 "subsystems": [ 00:19:28.317 { 00:19:28.317 "subsystem": "bdev", 00:19:28.317 "config": [ 00:19:28.317 { 00:19:28.317 "params": { 00:19:28.317 "io_mechanism": "io_uring_cmd", 00:19:28.317 "conserve_cpu": true, 00:19:28.317 "filename": "/dev/ng0n1", 00:19:28.317 "name": "xnvme_bdev" 00:19:28.317 }, 00:19:28.317 "method": "bdev_xnvme_create" 00:19:28.317 }, 00:19:28.317 { 00:19:28.317 "method": "bdev_wait_for_examine" 00:19:28.317 } 00:19:28.317 ] 00:19:28.317 } 00:19:28.317 ] 00:19:28.317 } 00:19:28.317 [2024-12-06 18:17:38.711145] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:19:28.317 [2024-12-06 18:17:38.711287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73004 ] 00:19:28.317 [2024-12-06 18:17:38.891064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.577 [2024-12-06 18:17:38.999896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.836 Running I/O for 5 seconds... 00:19:30.783 36864.00 IOPS, 144.00 MiB/s [2024-12-06T18:17:42.739Z] 35872.00 IOPS, 140.12 MiB/s [2024-12-06T18:17:43.676Z] 34965.33 IOPS, 136.58 MiB/s [2024-12-06T18:17:44.613Z] 34464.00 IOPS, 134.62 MiB/s 00:19:34.037 Latency(us) 00:19:34.037 [2024-12-06T18:17:44.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.037 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:34.037 xnvme_bdev : 5.00 34989.07 136.68 0.00 0.00 1823.87 829.07 7527.43 00:19:34.037 [2024-12-06T18:17:44.613Z] =================================================================================================================== 00:19:34.037 [2024-12-06T18:17:44.613Z] Total : 34989.07 136.68 0.00 0.00 1823.87 829.07 7527.43 00:19:34.971 18:17:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:34.971 18:17:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:34.971 18:17:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:34.971 18:17:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:34.971 18:17:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:34.971 { 00:19:34.971 "subsystems": [ 00:19:34.971 { 00:19:34.971 "subsystem": "bdev", 00:19:34.971 "config": [ 00:19:34.971 { 00:19:34.971 "params": { 00:19:34.971 "io_mechanism": "io_uring_cmd", 00:19:34.971 "conserve_cpu": true, 00:19:34.971 "filename": "/dev/ng0n1", 00:19:34.971 "name": "xnvme_bdev" 00:19:34.971 }, 00:19:34.971 "method": "bdev_xnvme_create" 00:19:34.971 }, 00:19:34.971 { 00:19:34.971 "method": "bdev_wait_for_examine" 00:19:34.971 } 00:19:34.971 ] 00:19:34.971 } 00:19:34.971 ] 00:19:34.971 } 00:19:35.230 [2024-12-06 18:17:45.562057] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
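Set against the conserve_cpu=false randread pass earlier (30231.47 IOPS at 2110.73 us mean latency), the 34989.07 IOPS above is a visible gain, though single 5-second samples are far too noisy to treat as a benchmark:

    awk 'BEGIN { printf "%.1f%%\n", (34989.07 / 30231.47 - 1) * 100 }'   # ~15.7% more IOPS with conserve_cpu=true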
00:19:35.230 [2024-12-06 18:17:45.562175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73085 ] 00:19:35.230 [2024-12-06 18:17:45.736680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.488 [2024-12-06 18:17:45.856852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.747 Running I/O for 5 seconds... 00:19:38.065 30353.00 IOPS, 118.57 MiB/s [2024-12-06T18:17:49.575Z] 29672.50 IOPS, 115.91 MiB/s [2024-12-06T18:17:50.509Z] 30213.67 IOPS, 118.02 MiB/s [2024-12-06T18:17:51.443Z] 30548.25 IOPS, 119.33 MiB/s [2024-12-06T18:17:51.443Z] 30045.00 IOPS, 117.36 MiB/s 00:19:40.867 Latency(us) 00:19:40.867 [2024-12-06T18:17:51.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.867 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:40.867 xnvme_bdev : 5.00 30030.75 117.31 0.00 0.00 2124.49 960.67 8369.66 00:19:40.867 [2024-12-06T18:17:51.443Z] =================================================================================================================== 00:19:40.867 [2024-12-06T18:17:51.443Z] Total : 30030.75 117.31 0.00 0.00 2124.49 960.67 8369.66 00:19:41.804 18:17:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:41.804 18:17:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:19:41.804 18:17:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:41.804 18:17:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:41.804 18:17:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:41.804 { 00:19:41.804 "subsystems": [ 00:19:41.804 { 00:19:41.804 "subsystem": "bdev", 00:19:41.804 "config": [ 00:19:41.804 { 00:19:41.804 "params": { 00:19:41.804 "io_mechanism": "io_uring_cmd", 00:19:41.804 "conserve_cpu": true, 00:19:41.804 "filename": "/dev/ng0n1", 00:19:41.804 "name": "xnvme_bdev" 00:19:41.804 }, 00:19:41.804 "method": "bdev_xnvme_create" 00:19:41.804 }, 00:19:41.804 { 00:19:41.804 "method": "bdev_wait_for_examine" 00:19:41.804 } 00:19:41.804 ] 00:19:41.804 } 00:19:41.804 ] 00:19:41.804 } 00:19:42.062 [2024-12-06 18:17:52.402408] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:19:42.062 [2024-12-06 18:17:52.402524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73160 ] 00:19:42.062 [2024-12-06 18:17:52.585440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.321 [2024-12-06 18:17:52.701754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.580 Running I/O for 5 seconds... 
00:19:44.893 70784.00 IOPS, 276.50 MiB/s [2024-12-06T18:17:56.400Z] 70912.00 IOPS, 277.00 MiB/s [2024-12-06T18:17:57.336Z] 71018.67 IOPS, 277.42 MiB/s [2024-12-06T18:17:58.272Z] 70896.00 IOPS, 276.94 MiB/s [2024-12-06T18:17:58.272Z] 70912.00 IOPS, 277.00 MiB/s 00:19:47.696 Latency(us) 00:19:47.696 [2024-12-06T18:17:58.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.696 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:19:47.696 xnvme_bdev : 5.00 70900.33 276.95 0.00 0.00 899.96 641.54 2513.53 00:19:47.696 [2024-12-06T18:17:58.272Z] =================================================================================================================== 00:19:47.696 [2024-12-06T18:17:58.272Z] Total : 70900.33 276.95 0.00 0.00 899.96 641.54 2513.53 00:19:48.631 18:17:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:48.632 18:17:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:19:48.632 18:17:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:48.632 18:17:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:48.632 18:17:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:48.632 { 00:19:48.632 "subsystems": [ 00:19:48.632 { 00:19:48.632 "subsystem": "bdev", 00:19:48.632 "config": [ 00:19:48.632 { 00:19:48.632 "params": { 00:19:48.632 "io_mechanism": "io_uring_cmd", 00:19:48.632 "conserve_cpu": true, 00:19:48.632 "filename": "/dev/ng0n1", 00:19:48.632 "name": "xnvme_bdev" 00:19:48.632 }, 00:19:48.632 "method": "bdev_xnvme_create" 00:19:48.632 }, 00:19:48.632 { 00:19:48.632 "method": "bdev_wait_for_examine" 00:19:48.632 } 00:19:48.632 ] 00:19:48.632 } 00:19:48.632 ] 00:19:48.632 } 00:19:48.890 [2024-12-06 18:17:59.231236] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:19:48.890 [2024-12-06 18:17:59.231387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73240 ] 00:19:48.890 [2024-12-06 18:17:59.410788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.149 [2024-12-06 18:17:59.522958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.408 Running I/O for 5 seconds... 
00:19:51.713 59684.00 IOPS, 233.14 MiB/s [2024-12-06T18:18:03.223Z] 60698.50 IOPS, 237.10 MiB/s [2024-12-06T18:18:04.165Z] 56355.33 IOPS, 220.14 MiB/s [2024-12-06T18:18:05.112Z] 55475.25 IOPS, 216.70 MiB/s [2024-12-06T18:18:05.112Z] 53167.60 IOPS, 207.69 MiB/s 00:19:54.536 Latency(us) 00:19:54.536 [2024-12-06T18:18:05.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.536 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:19:54.536 xnvme_bdev : 5.00 53132.30 207.55 0.00 0.00 1199.77 81.02 28635.81 00:19:54.536 [2024-12-06T18:18:05.113Z] =================================================================================================================== 00:19:54.537 [2024-12-06T18:18:05.113Z] Total : 53132.30 207.55 0.00 0.00 1199.77 81.02 28635.81 00:19:55.472 00:19:55.472 real 0m27.347s 00:19:55.472 user 0m17.230s 00:19:55.472 sys 0m8.606s 00:19:55.472 18:18:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:55.472 ************************************ 00:19:55.472 END TEST xnvme_bdevperf 00:19:55.472 ************************************ 00:19:55.472 18:18:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:55.472 18:18:06 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:55.472 18:18:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:55.472 18:18:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:55.472 18:18:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:55.472 ************************************ 00:19:55.472 START TEST xnvme_fio_plugin 00:19:55.472 ************************************ 00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
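For these conserve_cpu rounds the fio command line is unchanged except that the generated JSON now carries "conserve_cpu": true. As an aside, the long flag string is equivalent to a small fio job file; a sketch, with both file paths assumed rather than taken from the log:

    cat > /tmp/xnvme_fio.job <<'EOF'
    [xnvme_bdev]
    ioengine=spdk_bdev
    spdk_json_conf=/tmp/xnvme_bdev.json
    filename=xnvme_bdev
    direct=1
    bs=4k
    iodepth=64
    numjobs=1
    rw=randread
    time_based=1
    runtime=5
    thread=1
    EOF
    LD_PRELOAD="/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev" /usr/src/fio/fio /tmp/xnvme_fio.job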
00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:55.472 18:18:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:55.731 18:18:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:55.731 18:18:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:55.731 18:18:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:55.731 { 00:19:55.731 "subsystems": [ 00:19:55.731 { 00:19:55.731 "subsystem": "bdev", 00:19:55.731 "config": [ 00:19:55.731 { 00:19:55.731 "params": { 00:19:55.731 "io_mechanism": "io_uring_cmd", 00:19:55.731 "conserve_cpu": true, 00:19:55.731 "filename": "/dev/ng0n1", 00:19:55.731 "name": "xnvme_bdev" 00:19:55.731 }, 00:19:55.731 "method": "bdev_xnvme_create" 00:19:55.731 }, 00:19:55.731 { 00:19:55.731 "method": "bdev_wait_for_examine" 00:19:55.731 } 00:19:55.731 ] 00:19:55.731 } 00:19:55.731 ] 00:19:55.731 } 00:19:55.731 18:18:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:55.731 18:18:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:55.731 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:55.731 fio-3.35 00:19:55.731 Starting 1 thread 00:20:02.298 00:20:02.298 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73360: Fri Dec 6 18:18:12 2024 00:20:02.298 read: IOPS=30.5k, BW=119MiB/s (125MB/s)(595MiB/5002msec) 00:20:02.298 slat (usec): min=2, max=114, avg= 6.09, stdev= 2.57 00:20:02.298 clat (usec): min=976, max=6211, avg=1859.32, stdev=356.65 00:20:02.298 lat (usec): min=980, max=6239, avg=1865.41, stdev=357.97 00:20:02.298 clat percentiles (usec): 00:20:02.298 | 1.00th=[ 1139], 5.00th=[ 1319], 10.00th=[ 1450], 20.00th=[ 1565], 00:20:02.298 | 30.00th=[ 1663], 40.00th=[ 1745], 50.00th=[ 1827], 60.00th=[ 1909], 00:20:02.298 | 70.00th=[ 2024], 80.00th=[ 2147], 90.00th=[ 2343], 95.00th=[ 2474], 00:20:02.298 | 99.00th=[ 2671], 99.50th=[ 2737], 99.90th=[ 3163], 99.95th=[ 3589], 00:20:02.298 | 99.99th=[ 6063] 00:20:02.298 bw ( KiB/s): min=108032, max=142336, per=100.00%, avg=122709.33, stdev=10687.81, samples=9 00:20:02.298 iops : min=27008, max=35584, avg=30677.33, stdev=2671.95, samples=9 00:20:02.298 lat (usec) : 1000=0.01% 00:20:02.298 lat (msec) : 2=68.63%, 4=31.31%, 10=0.04% 00:20:02.298 cpu : usr=51.05%, sys=46.13%, ctx=7, majf=0, minf=762 00:20:02.298 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:02.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.298 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:20:02.298 issued rwts: total=152320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.298 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:02.298 00:20:02.298 Run status group 0 (all jobs): 00:20:02.298 READ: bw=119MiB/s (125MB/s), 119MiB/s-119MiB/s (125MB/s-125MB/s), io=595MiB (624MB), run=5002-5002msec 00:20:02.866 ----------------------------------------------------- 00:20:02.866 Suppressions used: 00:20:02.866 count bytes template 00:20:02.866 1 11 /usr/src/fio/parse.c 00:20:02.866 1 8 libtcmalloc_minimal.so 00:20:02.866 1 904 libcrypto.so 00:20:02.866 ----------------------------------------------------- 00:20:02.866 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:02.866 18:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:02.866 { 00:20:02.866 "subsystems": [ 00:20:02.866 { 00:20:02.866 "subsystem": "bdev", 00:20:02.866 "config": [ 00:20:02.866 { 00:20:02.866 "params": { 00:20:02.866 "io_mechanism": "io_uring_cmd", 00:20:02.866 "conserve_cpu": true, 00:20:02.866 "filename": "/dev/ng0n1", 00:20:02.866 "name": "xnvme_bdev" 00:20:02.866 }, 00:20:02.866 "method": "bdev_xnvme_create" 00:20:02.866 }, 00:20:02.866 { 00:20:02.866 "method": "bdev_wait_for_examine" 00:20:02.866 } 00:20:02.866 ] 00:20:02.866 } 00:20:02.866 ] 00:20:02.866 } 00:20:03.125 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:03.125 fio-3.35 00:20:03.125 Starting 1 thread 00:20:09.704 00:20:09.704 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73460: Fri Dec 6 18:18:19 2024 00:20:09.704 write: IOPS=32.0k, BW=125MiB/s (131MB/s)(625MiB/5001msec); 0 zone resets 00:20:09.704 slat (usec): min=2, max=610, avg= 5.90, stdev= 5.13 00:20:09.704 clat (usec): min=76, max=7364, avg=1771.76, stdev=439.91 00:20:09.704 lat (usec): min=80, max=7369, avg=1777.66, stdev=440.92 00:20:09.704 clat percentiles (usec): 00:20:09.704 | 1.00th=[ 963], 5.00th=[ 1270], 10.00th=[ 1369], 20.00th=[ 1467], 00:20:09.704 | 30.00th=[ 1549], 40.00th=[ 1614], 50.00th=[ 1680], 60.00th=[ 1778], 00:20:09.704 | 70.00th=[ 1893], 80.00th=[ 2073], 90.00th=[ 2311], 95.00th=[ 2507], 00:20:09.704 | 99.00th=[ 3064], 99.50th=[ 3785], 99.90th=[ 4948], 99.95th=[ 5735], 00:20:09.704 | 99.99th=[ 6652] 00:20:09.704 bw ( KiB/s): min=103217, max=150016, per=100.00%, avg=128517.44, stdev=14270.79, samples=9 00:20:09.704 iops : min=25804, max=37504, avg=32129.33, stdev=3567.75, samples=9 00:20:09.704 lat (usec) : 100=0.01%, 250=0.14%, 500=0.29%, 750=0.25%, 1000=0.39% 00:20:09.704 lat (msec) : 2=75.04%, 4=23.50%, 10=0.38% 00:20:09.704 cpu : usr=50.10%, sys=45.74%, ctx=14, majf=0, minf=763 00:20:09.704 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=12.0%, 16=24.4%, 32=51.3%, >=64=1.8% 00:20:09.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.704 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:20:09.704 issued rwts: total=0,160124,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.704 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:09.704 00:20:09.704 Run status group 0 (all jobs): 00:20:09.704 WRITE: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=625MiB (656MB), run=5001-5001msec 00:20:10.271 ----------------------------------------------------- 00:20:10.271 Suppressions used: 00:20:10.271 count bytes template 00:20:10.271 1 11 /usr/src/fio/parse.c 00:20:10.271 1 8 libtcmalloc_minimal.so 00:20:10.271 1 904 libcrypto.so 00:20:10.271 ----------------------------------------------------- 00:20:10.271 00:20:10.271 00:20:10.271 real 0m14.723s 00:20:10.271 user 0m8.807s 00:20:10.271 sys 0m5.310s 00:20:10.271 18:18:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:10.271 ************************************ 00:20:10.271 END TEST xnvme_fio_plugin 00:20:10.271 ************************************ 00:20:10.271 18:18:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:10.271 18:18:20 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 72920 00:20:10.271 18:18:20 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72920 ']' 00:20:10.271 18:18:20 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 72920 00:20:10.271 
Process with pid 72920 is not found 00:20:10.271 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72920) - No such process 00:20:10.271 18:18:20 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 72920 is not found' 00:20:10.271 ************************************ 00:20:10.271 END TEST nvme_xnvme 00:20:10.271 ************************************ 00:20:10.271 00:20:10.271 real 3m50.893s 00:20:10.271 user 2m5.046s 00:20:10.271 sys 1m29.176s 00:20:10.271 18:18:20 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:10.271 18:18:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:10.530 18:18:20 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:20:10.530 18:18:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:10.530 18:18:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:10.530 18:18:20 -- common/autotest_common.sh@10 -- # set +x 00:20:10.530 ************************************ 00:20:10.530 START TEST blockdev_xnvme 00:20:10.530 ************************************ 00:20:10.530 18:18:20 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:20:10.530 * Looking for test storage... 00:20:10.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:10.530 18:18:21 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:10.530 18:18:21 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:20:10.530 18:18:21 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:10.530 18:18:21 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:10.530 18:18:21 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:20:10.789 18:18:21 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:20:10.789 18:18:21 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:20:10.789 18:18:21 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:20:10.789 18:18:21 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:10.789 18:18:21 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:20:10.789 18:18:21 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:20:10.789 18:18:21 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:10.789 18:18:21 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:10.790 18:18:21 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:20:10.790 18:18:21 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:10.790 18:18:21 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:10.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.790 --rc genhtml_branch_coverage=1 00:20:10.790 --rc genhtml_function_coverage=1 00:20:10.790 --rc genhtml_legend=1 00:20:10.790 --rc geninfo_all_blocks=1 00:20:10.790 --rc geninfo_unexecuted_blocks=1 00:20:10.790 00:20:10.790 ' 00:20:10.790 18:18:21 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:10.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.790 --rc genhtml_branch_coverage=1 00:20:10.790 --rc genhtml_function_coverage=1 00:20:10.790 --rc genhtml_legend=1 00:20:10.790 --rc geninfo_all_blocks=1 00:20:10.790 --rc geninfo_unexecuted_blocks=1 00:20:10.790 00:20:10.790 ' 00:20:10.790 18:18:21 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:10.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.790 --rc genhtml_branch_coverage=1 00:20:10.790 --rc genhtml_function_coverage=1 00:20:10.790 --rc genhtml_legend=1 00:20:10.790 --rc geninfo_all_blocks=1 00:20:10.790 --rc geninfo_unexecuted_blocks=1 00:20:10.790 00:20:10.790 ' 00:20:10.790 18:18:21 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:10.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.790 --rc genhtml_branch_coverage=1 00:20:10.790 --rc genhtml_function_coverage=1 00:20:10.790 --rc genhtml_legend=1 00:20:10.790 --rc geninfo_all_blocks=1 00:20:10.790 --rc geninfo_unexecuted_blocks=1 00:20:10.790 00:20:10.790 ' 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73594 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:10.790 18:18:21 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73594 00:20:10.790 18:18:21 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73594 ']' 00:20:10.790 18:18:21 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.790 18:18:21 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.790 18:18:21 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.790 18:18:21 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.790 18:18:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:10.790 [2024-12-06 18:18:21.247093] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
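start_spdk_tgt above launches the target and then blocks in waitforlisten until pid 73594 answers on /var/tmp/spdk.sock. A simplified sketch of that readiness loop — the real helper in autotest_common.sh is more thorough; using rpc_get_methods as the probe and a 0.5 s interval are assumptions here:

    # Simplified waitforlisten: poll until the target's RPC socket answers,
    # bailing out early if the process dies while starting up.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do          # max_retries=100, as in the log
            kill -0 "$pid" 2>/dev/null || return 1
            [[ -S $rpc_addr ]] &&
                scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null &&
                return 0
            sleep 0.5
        done
        return 1
    }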
00:20:10.790 [2024-12-06 18:18:21.247447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73594 ] 00:20:11.049 [2024-12-06 18:18:21.429880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.049 [2024-12-06 18:18:21.550648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.984 18:18:22 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.984 18:18:22 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:20:11.984 18:18:22 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:20:11.984 18:18:22 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:20:11.984 18:18:22 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:20:11.984 18:18:22 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:20:11.984 18:18:22 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:12.920 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:13.513 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:20:13.513 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:20:13.513 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:20:13.513 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:20:13.513 18:18:23 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2c2n1 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:20:13.513 nvme0n1 00:20:13.513 nvme0n2 00:20:13.513 nvme0n3 00:20:13.513 nvme1n1 00:20:13.513 nvme2n1 00:20:13.513 nvme3n1 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.513 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:13.513 18:18:23 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.514 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:20:13.514 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:20:13.514 18:18:23 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.514 18:18:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:13.514 18:18:23 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.514 18:18:23 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:20:13.514 18:18:23 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.514 18:18:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:13.514 18:18:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.514 18:18:24 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:13.514 18:18:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.514 18:18:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:13.514 
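The printf near the end of the block above (and the six bdev names echoed back) shows the whole RPC batch that setup assembled after the zoned-device scan: one bdev_xnvme_create per visible, non-zoned namespace, all with the io_uring mechanism and -c (conserve_cpu). Issued one at a time instead of as a batch, the equivalent calls would be:

    # Equivalent standalone calls for the batch assembled above:
    # bdev_xnvme_create <filename> <name> <io_mechanism> [-c]
    for dev in nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1; do
        scripts/rpc.py -s /var/tmp/spdk.sock \
            bdev_xnvme_create "/dev/$dev" "$dev" io_uring -c
    done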
18:18:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.514 18:18:24 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:20:13.514 18:18:24 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:20:13.514 18:18:24 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:20:13.793 18:18:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.793 18:18:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:13.793 18:18:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.793 18:18:24 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:20:13.793 18:18:24 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:20:13.794 18:18:24 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "b6796373-74a1-4cb3-b3b0-9350b52f0ea9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b6796373-74a1-4cb3-b3b0-9350b52f0ea9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "80e9df41-eef8-490e-a1b6-12781870be77"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "80e9df41-eef8-490e-a1b6-12781870be77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "533f7a2a-b5d1-4152-8bd2-40e5d0d183da"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "533f7a2a-b5d1-4152-8bd2-40e5d0d183da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "15da645c-8a2e-4caa-a2a9-eae98da97aa5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "15da645c-8a2e-4caa-a2a9-eae98da97aa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "9b88be68-2d87-4199-a023-e04940bfdb8f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9b88be68-2d87-4199-a023-e04940bfdb8f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "1931a7b9-2658-4f5a-a017-20deec00883a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1931a7b9-2658-4f5a-a017-20deec00883a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:20:13.794 18:18:24 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:20:13.794 18:18:24 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:20:13.794 18:18:24 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:20:13.794 18:18:24 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73594 00:20:13.794 18:18:24 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73594 ']' 00:20:13.794 18:18:24 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73594 00:20:13.794 18:18:24 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:20:13.794 18:18:24 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.794 18:18:24 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 73594 00:20:13.794 killing process with pid 73594 00:20:13.794 18:18:24 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:13.794 18:18:24 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:13.794 18:18:24 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73594' 00:20:13.794 18:18:24 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73594 00:20:13.794 18:18:24 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73594 00:20:16.325 18:18:26 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:16.325 18:18:26 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:20:16.325 18:18:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:16.325 18:18:26 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.325 18:18:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:16.325 ************************************ 00:20:16.325 START TEST bdev_hello_world 00:20:16.325 ************************************ 00:20:16.325 18:18:26 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:20:16.325 [2024-12-06 18:18:26.793341] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:20:16.325 [2024-12-06 18:18:26.793652] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73895 ] 00:20:16.584 [2024-12-06 18:18:26.986101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.584 [2024-12-06 18:18:27.101623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.152 [2024-12-06 18:18:27.545382] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:17.152 [2024-12-06 18:18:27.545572] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:20:17.152 [2024-12-06 18:18:27.545600] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:17.152 [2024-12-06 18:18:27.547701] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:17.152 [2024-12-06 18:18:27.548054] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:17.152 [2024-12-06 18:18:27.548078] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:17.152 [2024-12-06 18:18:27.548390] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
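The hello-world pass just above writes a string through the bdev layer to nvme0n1 and reads it back. Its invocation, taken from the run_test line (the trailing '' in the log is an empty extra-argument slot, dropped here):

    # Direct invocation of the pass above: open nvme0n1 from bdev.json,
    # write "Hello World!" through the bdev layer, then read it back.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1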
00:20:17.152 00:20:17.152 [2024-12-06 18:18:27.548422] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:18.531 00:20:18.531 real 0m1.990s 00:20:18.531 user 0m1.608s 00:20:18.531 sys 0m0.265s 00:20:18.531 ************************************ 00:20:18.531 END TEST bdev_hello_world 00:20:18.531 ************************************ 00:20:18.531 18:18:28 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:18.531 18:18:28 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:18.531 18:18:28 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:20:18.531 18:18:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:18.531 18:18:28 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:18.531 18:18:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:18.531 ************************************ 00:20:18.531 START TEST bdev_bounds 00:20:18.531 ************************************ 00:20:18.531 18:18:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:20:18.531 18:18:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73936 00:20:18.531 18:18:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:18.531 18:18:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:18.531 18:18:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73936' 00:20:18.531 Process bdevio pid: 73936 00:20:18.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.531 18:18:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73936 00:20:18.531 18:18:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73936 ']' 00:20:18.532 18:18:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.532 18:18:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.532 18:18:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.532 18:18:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.532 18:18:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:18.532 [2024-12-06 18:18:28.852070] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
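bdev_bounds, starting above, pairs two pieces: the bdevio app launched in wait-for-RPC mode (-w) on the same bdev.json, and tests.py, which fires the CUnit suites once the app is listening (seen just below). A sketch of that pairing — the sleep is a stand-in for the waitforlisten step the real script uses:

    # Sketch of the bdev_bounds pairing: bdevio waits for an RPC trigger,
    # tests.py then runs the CUnit suites over the default RPC socket.
    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    bdevio_pid=$!
    sleep 1   # stand-in; the log shows waitforlisten doing this properly
    ./test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid" && wait "$bdevio_pid"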
00:20:18.532 [2024-12-06 18:18:28.852195] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73936 ] 00:20:18.532 [2024-12-06 18:18:29.034996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:18.791 [2024-12-06 18:18:29.155469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.791 [2024-12-06 18:18:29.155627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.791 [2024-12-06 18:18:29.155655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.359 18:18:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.359 18:18:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:20:19.359 18:18:29 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:19.359 I/O targets: 00:20:19.359 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:19.359 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:19.359 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:19.359 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:20:19.359 nvme2n1: 262144 blocks of 4096 bytes (1024 MiB) 00:20:19.359 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:20:19.359 00:20:19.359 00:20:19.359 CUnit - A unit testing framework for C - Version 2.1-3 00:20:19.359 http://cunit.sourceforge.net/ 00:20:19.359 00:20:19.359 00:20:19.359 Suite: bdevio tests on: nvme3n1 00:20:19.359 Test: blockdev write read block ...passed 00:20:19.359 Test: blockdev write zeroes read block ...passed 00:20:19.359 Test: blockdev write zeroes read no split ...passed 00:20:19.359 Test: blockdev write zeroes read split ...passed 00:20:19.359 Test: blockdev write zeroes read split partial ...passed 00:20:19.359 Test: blockdev reset ...passed 00:20:19.359 Test: blockdev write read 8 blocks ...passed 00:20:19.359 Test: blockdev write read size > 128k ...passed 00:20:19.359 Test: blockdev write read invalid size ...passed 00:20:19.359 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:19.359 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:19.359 Test: blockdev write read max offset ...passed 00:20:19.359 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:19.359 Test: blockdev writev readv 8 blocks ...passed 00:20:19.359 Test: blockdev writev readv 30 x 1block ...passed 00:20:19.359 Test: blockdev writev readv block ...passed 00:20:19.359 Test: blockdev writev readv size > 128k ...passed 00:20:19.359 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:19.360 Test: blockdev comparev and writev ...passed 00:20:19.360 Test: blockdev nvme passthru rw ...passed 00:20:19.360 Test: blockdev nvme passthru vendor specific ...passed 00:20:19.360 Test: blockdev nvme admin passthru ...passed 00:20:19.360 Test: blockdev copy ...passed 00:20:19.360 Suite: bdevio tests on: nvme2n1 00:20:19.360 Test: blockdev write read block ...passed 00:20:19.360 Test: blockdev write zeroes read block ...passed 00:20:19.360 Test: blockdev write zeroes read no split ...passed 00:20:19.360 Test: blockdev write zeroes read split ...passed 00:20:19.619 Test: blockdev write zeroes read split partial ...passed 00:20:19.619 Test: blockdev reset ...passed 
00:20:19.619 Test: blockdev write read 8 blocks ...passed 00:20:19.619 Test: blockdev write read size > 128k ...passed 00:20:19.619 Test: blockdev write read invalid size ...passed 00:20:19.619 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:19.619 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:19.619 Test: blockdev write read max offset ...passed 00:20:19.619 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:19.619 Test: blockdev writev readv 8 blocks ...passed 00:20:19.619 Test: blockdev writev readv 30 x 1block ...passed 00:20:19.619 Test: blockdev writev readv block ...passed 00:20:19.619 Test: blockdev writev readv size > 128k ...passed 00:20:19.619 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:19.619 Test: blockdev comparev and writev ...passed 00:20:19.619 Test: blockdev nvme passthru rw ...passed 00:20:19.619 Test: blockdev nvme passthru vendor specific ...passed 00:20:19.619 Test: blockdev nvme admin passthru ...passed 00:20:19.619 Test: blockdev copy ...passed 00:20:19.619 Suite: bdevio tests on: nvme1n1 00:20:19.619 Test: blockdev write read block ...passed 00:20:19.619 Test: blockdev write zeroes read block ...passed 00:20:19.619 Test: blockdev write zeroes read no split ...passed 00:20:19.619 Test: blockdev write zeroes read split ...passed 00:20:19.619 Test: blockdev write zeroes read split partial ...passed 00:20:19.619 Test: blockdev reset ...passed 00:20:19.619 Test: blockdev write read 8 blocks ...passed 00:20:19.619 Test: blockdev write read size > 128k ...passed 00:20:19.619 Test: blockdev write read invalid size ...passed 00:20:19.619 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:19.619 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:19.619 Test: blockdev write read max offset ...passed 00:20:19.619 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:19.619 Test: blockdev writev readv 8 blocks ...passed 00:20:19.619 Test: blockdev writev readv 30 x 1block ...passed 00:20:19.619 Test: blockdev writev readv block ...passed 00:20:19.619 Test: blockdev writev readv size > 128k ...passed 00:20:19.619 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:19.619 Test: blockdev comparev and writev ...passed 00:20:19.619 Test: blockdev nvme passthru rw ...passed 00:20:19.619 Test: blockdev nvme passthru vendor specific ...passed 00:20:19.619 Test: blockdev nvme admin passthru ...passed 00:20:19.619 Test: blockdev copy ...passed 00:20:19.619 Suite: bdevio tests on: nvme0n3 00:20:19.619 Test: blockdev write read block ...passed 00:20:19.619 Test: blockdev write zeroes read block ...passed 00:20:19.619 Test: blockdev write zeroes read no split ...passed 00:20:19.619 Test: blockdev write zeroes read split ...passed 00:20:19.619 Test: blockdev write zeroes read split partial ...passed 00:20:19.619 Test: blockdev reset ...passed 00:20:19.619 Test: blockdev write read 8 blocks ...passed 00:20:19.619 Test: blockdev write read size > 128k ...passed 00:20:19.619 Test: blockdev write read invalid size ...passed 00:20:19.619 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:19.619 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:19.619 Test: blockdev write read max offset ...passed 00:20:19.619 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:19.619 Test: blockdev writev readv 8 blocks 
...passed 00:20:19.619 Test: blockdev writev readv 30 x 1block ...passed 00:20:19.619 Test: blockdev writev readv block ...passed 00:20:19.619 Test: blockdev writev readv size > 128k ...passed 00:20:19.619 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:19.619 Test: blockdev comparev and writev ...passed 00:20:19.619 Test: blockdev nvme passthru rw ...passed 00:20:19.619 Test: blockdev nvme passthru vendor specific ...passed 00:20:19.619 Test: blockdev nvme admin passthru ...passed 00:20:19.619 Test: blockdev copy ...passed 00:20:19.619 Suite: bdevio tests on: nvme0n2 00:20:19.620 Test: blockdev write read block ...passed 00:20:19.620 Test: blockdev write zeroes read block ...passed 00:20:19.620 Test: blockdev write zeroes read no split ...passed 00:20:19.620 Test: blockdev write zeroes read split ...passed 00:20:19.879 Test: blockdev write zeroes read split partial ...passed 00:20:19.879 Test: blockdev reset ...passed 00:20:19.879 Test: blockdev write read 8 blocks ...passed 00:20:19.879 Test: blockdev write read size > 128k ...passed 00:20:19.879 Test: blockdev write read invalid size ...passed 00:20:19.879 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:19.879 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:19.879 Test: blockdev write read max offset ...passed 00:20:19.879 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:19.879 Test: blockdev writev readv 8 blocks ...passed 00:20:19.879 Test: blockdev writev readv 30 x 1block ...passed 00:20:19.879 Test: blockdev writev readv block ...passed 00:20:19.879 Test: blockdev writev readv size > 128k ...passed 00:20:19.879 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:19.879 Test: blockdev comparev and writev ...passed 00:20:19.879 Test: blockdev nvme passthru rw ...passed 00:20:19.879 Test: blockdev nvme passthru vendor specific ...passed 00:20:19.879 Test: blockdev nvme admin passthru ...passed 00:20:19.879 Test: blockdev copy ...passed 00:20:19.879 Suite: bdevio tests on: nvme0n1 00:20:19.879 Test: blockdev write read block ...passed 00:20:19.879 Test: blockdev write zeroes read block ...passed 00:20:19.879 Test: blockdev write zeroes read no split ...passed 00:20:19.879 Test: blockdev write zeroes read split ...passed 00:20:19.879 Test: blockdev write zeroes read split partial ...passed 00:20:19.879 Test: blockdev reset ...passed 00:20:19.879 Test: blockdev write read 8 blocks ...passed 00:20:19.879 Test: blockdev write read size > 128k ...passed 00:20:19.879 Test: blockdev write read invalid size ...passed 00:20:19.879 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:19.879 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:19.879 Test: blockdev write read max offset ...passed 00:20:19.879 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:19.879 Test: blockdev writev readv 8 blocks ...passed 00:20:19.879 Test: blockdev writev readv 30 x 1block ...passed 00:20:19.879 Test: blockdev writev readv block ...passed 00:20:19.879 Test: blockdev writev readv size > 128k ...passed 00:20:19.879 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:19.879 Test: blockdev comparev and writev ...passed 00:20:19.879 Test: blockdev nvme passthru rw ...passed 00:20:19.879 Test: blockdev nvme passthru vendor specific ...passed 00:20:19.879 Test: blockdev nvme admin passthru ...passed 00:20:19.879 Test: blockdev copy ...passed 
00:20:19.879 00:20:19.879 Run Summary: Type Total Ran Passed Failed Inactive 00:20:19.879 suites 6 6 n/a 0 0 00:20:19.879 tests 138 138 138 0 0 00:20:19.879 asserts 780 780 780 0 n/a 00:20:19.879 00:20:19.879 Elapsed time = 1.326 seconds 00:20:19.879 0 00:20:19.879 18:18:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73936 00:20:19.879 18:18:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73936 ']' 00:20:19.879 18:18:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73936 00:20:19.879 18:18:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:20:19.879 18:18:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.879 18:18:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73936 00:20:19.879 killing process with pid 73936 00:20:19.879 18:18:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.879 18:18:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.879 18:18:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73936' 00:20:19.879 18:18:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73936 00:20:19.879 18:18:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73936 00:20:21.259 18:18:31 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:21.259 00:20:21.259 real 0m2.728s 00:20:21.259 user 0m6.816s 00:20:21.259 sys 0m0.384s 00:20:21.259 18:18:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.259 18:18:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:21.259 ************************************ 00:20:21.259 END TEST bdev_bounds 00:20:21.259 ************************************ 00:20:21.259 18:18:31 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:20:21.259 18:18:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:21.259 18:18:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.259 18:18:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:21.259 ************************************ 00:20:21.259 START TEST bdev_nbd 00:20:21.259 ************************************ 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
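bdev_nbd, starting above, exports each of the six bdevs as a kernel /dev/nbdX node through the NBD RPCs; waitfornbd (visible in the blocks that follow) then proves each node works by reading a single 4 KiB block with dd. A condensed sketch of that mapping loop — here the nbd node is passed explicitly, whereas the run below lets the RPC pick one:

    # Condensed sketch of the nbd mapping exercised below: export each
    # bdev over NBD, then verify the node with one direct 4 KiB read.
    bdevs=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
    nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5)
    for i in "${!bdevs[@]}"; do
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
        dd if="${nbds[$i]}" of=/dev/null bs=4096 count=1 iflag=direct
    done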
00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73993 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73993 /var/tmp/spdk-nbd.sock 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73993 ']' 00:20:21.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.259 18:18:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:21.259 [2024-12-06 18:18:31.666274] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
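Two details of the prologue above are worth noting: the test only proceeds if the kernel nbd module is present, and it talks to a lightweight bdev_svc app on its own socket rather than to a full spdk_tgt. Reduced to its essentials (relative paths assumed):

    # Essentials of the bdev_nbd prologue above: require the kernel nbd
    # module, then serve the bdevs from bdev_svc on a dedicated socket.
    [[ -e /sys/module/nbd ]] || exit 0   # assumption: the real script skips rather than exits
    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
        --json ./test/bdev/bdev.json &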
00:20:21.259 [2024-12-06 18:18:31.666861] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.518 [2024-12-06 18:18:31.846578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.518 [2024-12-06 18:18:31.955038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.085 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.085 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:20:22.085 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:20:22.085 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:22.085 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:22.085 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:22.085 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:20:22.085 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:22.086 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:22.086 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:22.086 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:22.086 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:22.086 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:22.086 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:22.086 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:22.345 
1+0 records in 00:20:22.345 1+0 records out 00:20:22.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000917133 s, 4.5 MB/s 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:22.345 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:20:22.604 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:20:22.604 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:20:22.604 18:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:20:22.604 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:22.604 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:22.604 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:22.604 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:22.604 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:22.604 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:22.604 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:22.604 18:18:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:22.604 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:22.604 1+0 records in 00:20:22.604 1+0 records out 00:20:22.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443603 s, 9.2 MB/s 00:20:22.605 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.605 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:22.605 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.605 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:22.605 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:22.605 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:22.605 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:22.605 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:20:22.863 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:20:22.864 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:20:22.864 18:18:33 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:20:22.864 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:20:22.864 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:22.864 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:22.864 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:22.864 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:20:22.864 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:22.864 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:22.864 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:22.864 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:22.864 1+0 records in 00:20:22.864 1+0 records out 00:20:22.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695384 s, 5.9 MB/s 00:20:22.864 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.864 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:22.864 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.864 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:22.864 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:22.864 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:22.864 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:22.864 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:20:23.123 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:20:23.123 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:20:23.123 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:20:23.123 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:20:23.123 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:23.123 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:23.123 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:23.123 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:20:23.123 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:23.123 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:23.123 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:23.123 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.123 1+0 records in 00:20:23.123 1+0 records out 00:20:23.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692817 s, 5.9 MB/s 00:20:23.123 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.123 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:23.124 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.124 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:23.124 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:23.124 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:23.124 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:23.124 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.383 1+0 records in 00:20:23.383 1+0 records out 00:20:23.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000727276 s, 5.6 MB/s 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:23.383 18:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:20:23.642 18:18:34 
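Every nbd_start_disk iteration in this stretch follows one pattern: the RPC is invoked with only a bdev name, so the target picks the next free NBD node and prints its path, and the caller then waits on that device. A sketch of the loop (bdev/nbd_common.sh @27-@28 and @30 as reconstructed from the trace; rpc_server and bdev_list are bound by the enclosing function, and $rootdir stands in for the spdk checkout):

    for ((i = 0; i < ${#bdev_list[@]}; i++)); do
        # No device argument: the target allocates a free /dev/nbdX and
        # prints it on stdout.
        nbd_device=$("$rootdir/scripts/rpc.py" -s "$rpc_server" \
            nbd_start_disk "${bdev_list[i]}")
        # Block until the kernel exposes the node and it services reads.
        waitfornbd "$(basename "$nbd_device")"
    done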
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.642 1+0 records in 00:20:23.642 1+0 records out 00:20:23.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000762365 s, 5.4 MB/s 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:23.642 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:23.903 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:23.903 { 00:20:23.903 "nbd_device": "/dev/nbd0", 00:20:23.903 "bdev_name": "nvme0n1" 00:20:23.903 }, 00:20:23.903 { 00:20:23.903 "nbd_device": "/dev/nbd1", 00:20:23.903 "bdev_name": "nvme0n2" 00:20:23.903 }, 00:20:23.903 { 00:20:23.903 "nbd_device": "/dev/nbd2", 00:20:23.903 "bdev_name": "nvme0n3" 00:20:23.903 }, 00:20:23.903 { 00:20:23.903 "nbd_device": "/dev/nbd3", 00:20:23.903 "bdev_name": "nvme1n1" 00:20:23.903 }, 00:20:23.903 { 00:20:23.903 "nbd_device": "/dev/nbd4", 00:20:23.903 "bdev_name": "nvme2n1" 00:20:23.903 }, 00:20:23.903 { 00:20:23.903 "nbd_device": "/dev/nbd5", 00:20:23.903 "bdev_name": "nvme3n1" 00:20:23.903 } 00:20:23.903 ]' 00:20:23.903 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:23.903 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:23.903 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:23.903 { 00:20:23.903 "nbd_device": "/dev/nbd0", 00:20:23.903 "bdev_name": "nvme0n1" 00:20:23.903 }, 00:20:23.903 { 00:20:23.903 "nbd_device": "/dev/nbd1", 00:20:23.903 "bdev_name": "nvme0n2" 00:20:23.903 }, 00:20:23.903 { 00:20:23.903 "nbd_device": "/dev/nbd2", 00:20:23.903 "bdev_name": "nvme0n3" 00:20:23.903 }, 00:20:23.903 { 00:20:23.903 "nbd_device": "/dev/nbd3", 00:20:23.903 "bdev_name": "nvme1n1" 00:20:23.903 }, 00:20:23.903 { 00:20:23.903 "nbd_device": "/dev/nbd4", 00:20:23.903 "bdev_name": "nvme2n1" 00:20:23.903 }, 00:20:23.903 { 00:20:23.903 "nbd_device": 
"/dev/nbd5", 00:20:23.903 "bdev_name": "nvme3n1" 00:20:23.903 } 00:20:23.903 ]' 00:20:23.903 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:20:23.903 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:23.903 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:20:23.903 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:23.903 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:23.903 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:23.903 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:23.903 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:24.202 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:20:24.461 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:20:24.461 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:20:24.461 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:20:24.461 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:24.461 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.461 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:20:24.461 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:24.461 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:24.461 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:24.461 18:18:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:20:24.729 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:20:24.729 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:20:24.729 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:20:24.729 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:24.729 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.729 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:20:24.729 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:24.729 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:24.729 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:24.729 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:20:24.990 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:20:24.990 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:20:24.990 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:20:24.990 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:24.990 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.990 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:20:24.990 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:24.990 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:24.990 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:24.990 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:20:25.270 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:20:25.270 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:20:25.270 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:20:25.270 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:25.270 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:25.270 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:20:25.270 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:25.270 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:25.270 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:25.270 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:25.270 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:25.270 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:25.270 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:25.270 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:25.529 18:18:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:20:25.529 /dev/nbd0 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:25.788 1+0 records in 00:20:25.788 1+0 records out 00:20:25.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630972 s, 6.5 MB/s 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:25.788 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:20:25.788 /dev/nbd1 00:20:26.047 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:26.047 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:26.047 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:26.048 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:26.048 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:26.048 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:26.048 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:26.048 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:26.048 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:26.048 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:26.048 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:26.048 1+0 records in 00:20:26.048 1+0 records out 00:20:26.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000673886 s, 6.1 MB/s 00:20:26.048 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.048 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:26.048 18:18:36 
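This second pass exercises the other form of nbd_start_disks, where the caller supplies an explicit device list and each bdev is pinned to a specific node (/dev/nbd0, /dev/nbd1, /dev/nbd10, and so on). A sketch per the trace at @9-@17 (the literal bound of 6 in the log is just the list length; generalized here):

    nbd_start_disks() {
        local rpc_server=$1
        local bdev_list=($2)   # "nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1"
        local nbd_list=($3)    # "/dev/nbd0 /dev/nbd1 /dev/nbd10 ... /dev/nbd13"
        local i
        for ((i = 0; i < ${#nbd_list[@]}; i++)); do
            # Explicit device argument: attach bdev i to the requested node.
            "$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_start_disk \
                "${bdev_list[i]}" "${nbd_list[i]}"
            waitfornbd "$(basename "${nbd_list[i]}")"
        done
    }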
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.048 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:26.048 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:26.048 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:26.048 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:26.048 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:20:26.307 /dev/nbd10 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:26.307 1+0 records in 00:20:26.307 1+0 records out 00:20:26.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000713609 s, 5.7 MB/s 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:26.307 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:20:26.566 /dev/nbd11 00:20:26.566 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:20:26.566 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:20:26.566 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:20:26.566 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:26.566 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:26.566 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:26.566 18:18:36 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:20:26.566 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:26.566 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:26.566 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:26.566 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:26.566 1+0 records in 00:20:26.566 1+0 records out 00:20:26.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000861128 s, 4.8 MB/s 00:20:26.566 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.566 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:26.566 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.566 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:26.566 18:18:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:26.566 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:26.566 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:26.566 18:18:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:20:26.825 /dev/nbd12 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:26.825 1+0 records in 00:20:26.825 1+0 records out 00:20:26.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000837275 s, 4.9 MB/s 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:26.825 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:20:27.086 /dev/nbd13 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:27.086 1+0 records in 00:20:27.086 1+0 records out 00:20:27.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000742601 s, 5.5 MB/s 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:27.086 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:27.380 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:27.380 { 00:20:27.380 "nbd_device": "/dev/nbd0", 00:20:27.380 "bdev_name": "nvme0n1" 00:20:27.380 }, 00:20:27.380 { 00:20:27.380 "nbd_device": "/dev/nbd1", 00:20:27.380 "bdev_name": "nvme0n2" 00:20:27.380 }, 00:20:27.380 { 00:20:27.380 "nbd_device": "/dev/nbd10", 00:20:27.380 "bdev_name": "nvme0n3" 00:20:27.380 }, 00:20:27.380 { 00:20:27.380 "nbd_device": "/dev/nbd11", 00:20:27.380 "bdev_name": "nvme1n1" 00:20:27.380 }, 00:20:27.380 { 00:20:27.380 "nbd_device": "/dev/nbd12", 00:20:27.380 "bdev_name": "nvme2n1" 00:20:27.380 }, 00:20:27.380 { 00:20:27.380 "nbd_device": "/dev/nbd13", 00:20:27.380 "bdev_name": "nvme3n1" 00:20:27.380 } 00:20:27.380 ]' 00:20:27.380 18:18:37 
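The JSON listing above feeds the verification that follows: nbd_get_count reduces the disk list to a number with jq and grep -c, the suite asserts it equals 6, then pushes the same random megabyte through every node and reads it back with cmp. A condensed sketch of that flow (nbd_common.sh @61-@66 and @70-@85 per the trace; the "|| true" on grep is inferred from the bare true step logged at @65):

    nbd_get_count() {
        local rpc_server=$1
        local json names
        json=$("$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        # grep -c exits nonzero on zero matches but still prints 0.
        echo "$names" | grep -c /dev/nbd || true
    }

    # Write/verify pass: one 1 MiB random file, written to and then
    # compared against every attached device with O_DIRECT writes.
    tmp_file=$rootdir/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"   # byte-for-byte read-back check
    done
    rm "$tmp_file"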
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:27.380 { 00:20:27.380 "nbd_device": "/dev/nbd0", 00:20:27.380 "bdev_name": "nvme0n1" 00:20:27.380 }, 00:20:27.380 { 00:20:27.380 "nbd_device": "/dev/nbd1", 00:20:27.380 "bdev_name": "nvme0n2" 00:20:27.380 }, 00:20:27.380 { 00:20:27.380 "nbd_device": "/dev/nbd10", 00:20:27.380 "bdev_name": "nvme0n3" 00:20:27.380 }, 00:20:27.380 { 00:20:27.380 "nbd_device": "/dev/nbd11", 00:20:27.380 "bdev_name": "nvme1n1" 00:20:27.380 }, 00:20:27.380 { 00:20:27.380 "nbd_device": "/dev/nbd12", 00:20:27.380 "bdev_name": "nvme2n1" 00:20:27.380 }, 00:20:27.380 { 00:20:27.380 "nbd_device": "/dev/nbd13", 00:20:27.380 "bdev_name": "nvme3n1" 00:20:27.380 } 00:20:27.380 ]' 00:20:27.380 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:27.380 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:27.380 /dev/nbd1 00:20:27.380 /dev/nbd10 00:20:27.380 /dev/nbd11 00:20:27.380 /dev/nbd12 00:20:27.380 /dev/nbd13' 00:20:27.381 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:27.381 /dev/nbd1 00:20:27.381 /dev/nbd10 00:20:27.381 /dev/nbd11 00:20:27.381 /dev/nbd12 00:20:27.381 /dev/nbd13' 00:20:27.381 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:27.381 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:20:27.381 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:20:27.381 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:20:27.381 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:20:27.381 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:20:27.381 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:27.381 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:27.381 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:27.381 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:27.381 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:27.381 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:27.381 256+0 records in 00:20:27.381 256+0 records out 00:20:27.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00534412 s, 196 MB/s 00:20:27.381 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:27.381 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:27.381 256+0 records in 00:20:27.381 256+0 records out 00:20:27.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124975 s, 8.4 MB/s 00:20:27.381 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:27.381 18:18:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:27.640 256+0 records in 00:20:27.640 256+0 records out 00:20:27.640 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.126734 s, 8.3 MB/s 00:20:27.640 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:27.640 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:20:27.640 256+0 records in 00:20:27.640 256+0 records out 00:20:27.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124529 s, 8.4 MB/s 00:20:27.640 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:27.640 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:20:27.899 256+0 records in 00:20:27.899 256+0 records out 00:20:27.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149426 s, 7.0 MB/s 00:20:27.899 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:27.899 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:20:27.899 256+0 records in 00:20:27.899 256+0 records out 00:20:27.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130274 s, 8.0 MB/s 00:20:27.899 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:27.899 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:20:28.158 256+0 records in 00:20:28.158 256+0 records out 00:20:28.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1261 s, 8.3 MB/s 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:28.158 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:28.417 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:28.417 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:28.417 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:28.417 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:28.417 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:28.417 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:28.417 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:28.417 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:28.417 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:28.417 18:18:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:28.675 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:28.675 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:28.675 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:28.675 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:28.675 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:28.675 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:28.675 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:28.675 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:28.675 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:28.675 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:20:28.934 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:20:28.934 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:20:28.934 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:20:28.934 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:28.934 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:28.934 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:20:28.934 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:28.934 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:28.934 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:28.934 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:20:29.193 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:20:29.193 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:20:29.193 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:20:29.193 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:29.193 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:29.193 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:20:29.193 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:29.193 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:29.193 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:29.193 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:20:29.452 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:20:29.452 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:20:29.452 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:20:29.452 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:29.452 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:29.452 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:20:29.452 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:29.452 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:29.452 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:29.452 18:18:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:20:29.711 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:20:29.711 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:20:29.711 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:20:29.711 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:29.711 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:20:29.711 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:20:29.711 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:29.711 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:29.711 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:29.711 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:29.711 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:29.970 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:29.970 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:29.970 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:29.970 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:29.970 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:29.970 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:29.970 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:29.970 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:29.970 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:29.970 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:29.970 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:29.970 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:29.970 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:29.970 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:29.970 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:29.970 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:30.229 malloc_lvol_verify 00:20:30.229 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:30.229 a739938c-6922-4726-8c40-48f547ac6c80 00:20:30.488 18:18:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:30.488 0c8f7121-a5c7-4ac5-92ca-4b9bced63fab 00:20:30.488 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:30.747 /dev/nbd0 00:20:30.747 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:30.747 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:30.747 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:30.747 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:30.747 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:30.747 mke2fs 1.47.0 (5-Feb-2023) 00:20:30.747 
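The closing nbd check builds a logical volume and proves the exported node behaves like a real disk: a malloc bdev backs an lvstore, a small lvol carved from it is attached to /dev/nbd0, the test waits for the kernel to report a nonzero capacity, and mkfs.ext4 has to complete on it. A sketch assembled from the RPC calls logged above (sizes are the logged arguments; the capacity check mirrors the "(( 8192 == 0 ))" probe in the trace):

    rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol in store "lvs"
    $rpc nbd_start_disk lvs/lvol /dev/nbd0

    # Do not run mkfs until the kernel has published a nonzero size.
    [[ -e /sys/block/nbd0/size ]] && (( $(cat /sys/block/nbd0/size) != 0 ))
    mkfs.ext4 /dev/nbd0
    # Teardown then goes back through nbd_stop_disks, as the trace shows.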
Discarding device blocks: 0/4096 done 00:20:30.747 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:30.747 00:20:30.747 Allocating group tables: 0/1 done 00:20:30.747 Writing inode tables: 0/1 done 00:20:30.747 Creating journal (1024 blocks): done 00:20:30.747 Writing superblocks and filesystem accounting information: 0/1 done 00:20:30.747 00:20:30.747 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:30.747 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:30.747 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:30.747 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:30.747 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:30.747 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:30.747 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:31.006 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:31.006 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:31.006 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:31.006 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:31.006 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:31.006 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:31.006 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:31.006 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:31.006 18:18:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73993 00:20:31.006 18:18:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73993 ']' 00:20:31.006 18:18:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73993 00:20:31.006 18:18:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:20:31.006 18:18:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.007 18:18:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73993 00:20:31.007 killing process with pid 73993 00:20:31.007 18:18:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:31.007 18:18:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:31.007 18:18:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73993' 00:20:31.007 18:18:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73993 00:20:31.007 18:18:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73993 00:20:32.387 18:18:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:32.387 00:20:32.387 real 0m11.136s 00:20:32.387 user 0m14.415s 00:20:32.387 sys 0m4.638s 00:20:32.387 18:18:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:32.387 18:18:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:32.387 ************************************ 00:20:32.387 END TEST bdev_nbd 00:20:32.387 
************************************ 00:20:32.387 18:18:42 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:20:32.387 18:18:42 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:20:32.387 18:18:42 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:20:32.387 18:18:42 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:20:32.387 18:18:42 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:32.387 18:18:42 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:32.387 18:18:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:32.388 ************************************ 00:20:32.388 START TEST bdev_fio 00:20:32.388 ************************************ 00:20:32.388 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:20:32.388 18:18:42 
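bdev_fio drives fio through SPDK's userspace spdk_bdev ioengine rather than the kernel block layer: fio_config_gen writes a base verify workload into test/bdev/bdev.fio, the trace that follows appends one [job_*] section per bdev, and the result is handed to the fio_bdev wrapper together with a JSON config describing the xnvme bdevs. A sketch of the section generation and the invocation (flags copied from the logged command line; the ">>" redirection target is implied rather than shown by the trace):

    # Append one job section per bdev to the generated fio file.
    for b in "${bdevs_name[@]}"; do
        echo "[job_${b}]"    >> "$rootdir/test/bdev/bdev.fio"
        echo "filename=${b}" >> "$rootdir/test/bdev/bdev.fio"
    done

    # fio_bdev wraps fio with the SPDK bdev plugin preloaded; bdev.json
    # describes the six xnvme bdevs the jobs reference by name.
    fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        "$rootdir/test/bdev/bdev.fio" --verify_state_save=0 \
        --spdk_json_conf="$rootdir/test/bdev/bdev.json" \
        --spdk_mem=0 --aux-path="$rootdir/../output"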
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:32.388 ************************************ 00:20:32.388 START TEST bdev_fio_rw_verify 00:20:32.388 ************************************ 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:32.388 18:18:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:32.648 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:32.648 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:32.648 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:32.648 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:32.648 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:32.648 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:32.648 fio-3.35 00:20:32.648 Starting 6 threads 00:20:44.931 00:20:44.931 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74405: Fri Dec 6 18:18:53 2024 00:20:44.931 read: IOPS=33.1k, BW=129MiB/s (136MB/s)(1294MiB/10001msec) 00:20:44.931 slat (usec): min=2, max=1152, avg= 6.11, stdev= 4.86 00:20:44.931 clat (usec): min=89, max=5285, avg=565.45, 
stdev=215.90 00:20:44.931 lat (usec): min=102, max=5296, avg=571.57, stdev=216.61 00:20:44.931 clat percentiles (usec): 00:20:44.931 | 50.000th=[ 570], 99.000th=[ 1188], 99.900th=[ 1926], 99.990th=[ 3589], 00:20:44.931 | 99.999th=[ 5276] 00:20:44.931 write: IOPS=33.4k, BW=131MiB/s (137MB/s)(1306MiB/10001msec); 0 zone resets 00:20:44.931 slat (usec): min=11, max=8162, avg=23.16, stdev=34.20 00:20:44.931 clat (usec): min=81, max=9030, avg=645.45, stdev=252.43 00:20:44.931 lat (usec): min=95, max=9054, avg=668.61, stdev=257.53 00:20:44.931 clat percentiles (usec): 00:20:44.931 | 50.000th=[ 635], 99.000th=[ 1467], 99.900th=[ 2376], 99.990th=[ 3392], 00:20:44.931 | 99.999th=[ 8848] 00:20:44.931 bw ( KiB/s): min=105368, max=153304, per=99.88%, avg=133548.16, stdev=2110.57, samples=114 00:20:44.931 iops : min=26342, max=38326, avg=33386.68, stdev=527.66, samples=114 00:20:44.931 lat (usec) : 100=0.01%, 250=4.80%, 500=26.08%, 750=49.95%, 1000=14.55% 00:20:44.931 lat (msec) : 2=4.47%, 4=0.16%, 10=0.01% 00:20:44.931 cpu : usr=58.76%, sys=27.40%, ctx=8769, majf=0, minf=27505 00:20:44.931 IO depths : 1=12.0%, 2=24.5%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:44.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.931 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.931 issued rwts: total=331349,334293,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.931 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:44.931 00:20:44.931 Run status group 0 (all jobs): 00:20:44.931 READ: bw=129MiB/s (136MB/s), 129MiB/s-129MiB/s (136MB/s-136MB/s), io=1294MiB (1357MB), run=10001-10001msec 00:20:44.931 WRITE: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=1306MiB (1369MB), run=10001-10001msec 00:20:44.931 ----------------------------------------------------- 00:20:44.931 Suppressions used: 00:20:44.931 count bytes template 00:20:44.931 6 48 /usr/src/fio/parse.c 00:20:44.931 2684 257664 /usr/src/fio/iolog.c 00:20:44.931 1 8 libtcmalloc_minimal.so 00:20:44.931 1 904 libcrypto.so 00:20:44.931 ----------------------------------------------------- 00:20:44.931 00:20:44.931 00:20:44.931 real 0m12.481s 00:20:44.931 user 0m37.223s 00:20:44.931 sys 0m16.870s 00:20:44.931 ************************************ 00:20:44.931 END TEST bdev_fio_rw_verify 00:20:44.931 ************************************ 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "b6796373-74a1-4cb3-b3b0-9350b52f0ea9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b6796373-74a1-4cb3-b3b0-9350b52f0ea9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "80e9df41-eef8-490e-a1b6-12781870be77"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "80e9df41-eef8-490e-a1b6-12781870be77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "533f7a2a-b5d1-4152-8bd2-40e5d0d183da"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "533f7a2a-b5d1-4152-8bd2-40e5d0d183da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "15da645c-8a2e-4caa-a2a9-eae98da97aa5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "15da645c-8a2e-4caa-a2a9-eae98da97aa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "9b88be68-2d87-4199-a023-e04940bfdb8f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9b88be68-2d87-4199-a023-e04940bfdb8f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "1931a7b9-2658-4f5a-a017-20deec00883a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1931a7b9-2658-4f5a-a017-20deec00883a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:44.931 /home/vagrant/spdk_repo/spdk 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
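The fio stage above leans on a sanitizer-preload pattern: fio itself is not built with ASan, so before launching it the harness runs ldd on the SPDK fio plugin, plucks out the libasan path, and places both the runtime and the plugin into LD_PRELOAD. A minimal sketch of that pattern in bash (paths taken from this job's trace; the real logic lives in common/autotest_common.sh):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Find the ASan runtime the plugin links against.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    if [[ -n "$asan_lib" ]]; then
        # The ASan runtime must come first in the initial library list,
        # so preload it ahead of the plugin before fio starts.
        export LD_PRELOAD="$asan_lib $plugin"
    fi
    /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json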
00:20:44.931 00:20:44.931 real 0m12.725s 00:20:44.931 user 0m37.334s 00:20:44.931 sys 0m17.003s 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.931 ************************************ 00:20:44.931 18:18:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:44.931 END TEST bdev_fio 00:20:44.931 ************************************ 00:20:45.190 18:18:55 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:45.190 18:18:55 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:45.190 18:18:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:45.190 18:18:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:45.190 18:18:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:45.190 ************************************ 00:20:45.190 START TEST bdev_verify 00:20:45.190 ************************************ 00:20:45.190 18:18:55 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:45.190 [2024-12-06 18:18:55.665180] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:20:45.190 [2024-12-06 18:18:55.665317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74582 ] 00:20:45.448 [2024-12-06 18:18:55.847992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:45.448 [2024-12-06 18:18:55.968300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.448 [2024-12-06 18:18:55.968365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.014 Running I/O for 5 seconds... 
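The verify pass above is a single bdevperf invocation against the six xnvme bdevs described in bdev.json; reproduced standalone, the command from the trace is:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

With -m 0x3 two reactors come up and each bdev is driven from both cores, which is why every device appears twice in the result table below (one row for Core Mask 0x1 and one for 0x2).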
00:20:48.327 27424.00 IOPS, 107.12 MiB/s [2024-12-06T18:18:59.838Z] 25824.00 IOPS, 100.88 MiB/s [2024-12-06T18:19:00.775Z] 25034.67 IOPS, 97.79 MiB/s [2024-12-06T18:19:01.776Z] 24824.00 IOPS, 96.97 MiB/s [2024-12-06T18:19:01.776Z] 24608.00 IOPS, 96.12 MiB/s 00:20:51.200 Latency(us) 00:20:51.200 [2024-12-06T18:19:01.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.200 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:51.200 Verification LBA range: start 0x0 length 0x80000 00:20:51.200 nvme0n1 : 5.05 1899.66 7.42 0.00 0.00 67264.82 7737.99 67799.49 00:20:51.200 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:51.200 Verification LBA range: start 0x80000 length 0x80000 00:20:51.200 nvme0n1 : 5.05 1850.91 7.23 0.00 0.00 69041.54 9843.56 68220.61 00:20:51.200 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:51.200 Verification LBA range: start 0x0 length 0x80000 00:20:51.200 nvme0n2 : 5.06 1896.35 7.41 0.00 0.00 67295.64 12844.00 59798.31 00:20:51.200 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:51.200 Verification LBA range: start 0x80000 length 0x80000 00:20:51.200 nvme0n2 : 5.06 1847.86 7.22 0.00 0.00 69052.11 9896.20 69483.95 00:20:51.200 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:51.200 Verification LBA range: start 0x0 length 0x80000 00:20:51.200 nvme0n3 : 5.05 1902.29 7.43 0.00 0.00 66992.84 9843.56 76642.90 00:20:51.200 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:51.200 Verification LBA range: start 0x80000 length 0x80000 00:20:51.200 nvme0n3 : 5.07 1842.08 7.20 0.00 0.00 69175.64 11738.58 72010.64 00:20:51.201 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:51.201 Verification LBA range: start 0x0 length 0xbd0bd 00:20:51.201 nvme1n1 : 5.06 2841.40 11.10 0.00 0.00 44712.21 4316.43 52428.80 00:20:51.201 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:51.201 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:20:51.201 nvme1n1 : 5.07 2850.90 11.14 0.00 0.00 44583.91 5263.94 56850.51 00:20:51.201 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:51.201 Verification LBA range: start 0x0 length 0x20000 00:20:51.201 nvme2n1 : 5.06 1895.52 7.40 0.00 0.00 67064.47 9843.56 68641.72 00:20:51.201 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:51.201 Verification LBA range: start 0x20000 length 0x20000 00:20:51.201 nvme2n1 : 5.07 1843.45 7.20 0.00 0.00 68770.80 10896.35 63588.34 00:20:51.201 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:51.201 Verification LBA range: start 0x0 length 0xa0000 00:20:51.201 nvme3n1 : 5.06 1896.96 7.41 0.00 0.00 66874.20 3790.03 74116.22 00:20:51.201 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:51.201 Verification LBA range: start 0xa0000 length 0xa0000 00:20:51.201 nvme3n1 : 5.07 1841.29 7.19 0.00 0.00 68797.34 7948.54 64851.69 00:20:51.201 [2024-12-06T18:19:01.777Z] =================================================================================================================== 00:20:51.201 [2024-12-06T18:19:01.777Z] Total : 24408.66 95.35 0.00 0.00 62566.13 3790.03 76642.90 00:20:52.578 00:20:52.578 real 0m7.158s 00:20:52.578 user 0m10.803s 00:20:52.578 sys 0m2.188s 00:20:52.578 18:19:02 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.578 ************************************ 00:20:52.578 END TEST bdev_verify 00:20:52.578 ************************************ 00:20:52.578 18:19:02 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:52.578 18:19:02 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:52.578 18:19:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:52.578 18:19:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.578 18:19:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:52.578 ************************************ 00:20:52.578 START TEST bdev_verify_big_io 00:20:52.578 ************************************ 00:20:52.578 18:19:02 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:52.578 [2024-12-06 18:19:02.896244] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:20:52.578 [2024-12-06 18:19:02.896402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74682 ] 00:20:52.578 [2024-12-06 18:19:03.077206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:52.838 [2024-12-06 18:19:03.193373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.838 [2024-12-06 18:19:03.193403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.406 Running I/O for 5 seconds... 
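For reference while reading this and the surrounding result tables, the three bdevperf passes in the suite differ only in their workload knobs (flags copied from the respective traces):

    # bdev_verify:         -q 128 -o 4096  -w verify        -t 5 -C -m 0x3   (4 KiB I/O, 2 cores)
    # bdev_verify_big_io:  -q 128 -o 65536 -w verify        -t 5 -C -m 0x3   (64 KiB I/O, 2 cores)
    # bdev_write_zeroes:   -q 128 -o 4096  -w write_zeroes  -t 1             (4 KiB I/O, 1 core)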
00:20:58.473 1792.00 IOPS, 112.00 MiB/s [2024-12-06T18:19:09.618Z] 3299.00 IOPS, 206.19 MiB/s [2024-12-06T18:19:09.618Z] 3663.67 IOPS, 228.98 MiB/s 00:20:59.042 Latency(us) 00:20:59.042 [2024-12-06T18:19:09.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.042 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:59.042 Verification LBA range: start 0x0 length 0x8000 00:20:59.042 nvme0n1 : 5.63 122.13 7.63 0.00 0.00 1026943.52 42743.16 2021351.33 00:20:59.042 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:59.042 Verification LBA range: start 0x8000 length 0x8000 00:20:59.042 nvme0n1 : 5.70 151.51 9.47 0.00 0.00 818196.20 86328.55 1017413.50 00:20:59.042 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:59.042 Verification LBA range: start 0x0 length 0x8000 00:20:59.042 nvme0n2 : 5.64 153.32 9.58 0.00 0.00 796699.68 4632.26 882656.75 00:20:59.042 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:59.042 Verification LBA range: start 0x8000 length 0x8000 00:20:59.042 nvme0n2 : 5.71 145.77 9.11 0.00 0.00 841014.21 66536.15 1057840.53 00:20:59.042 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:59.042 Verification LBA range: start 0x0 length 0x8000 00:20:59.042 nvme0n3 : 5.70 157.19 9.82 0.00 0.00 760743.05 74537.33 1623818.90 00:20:59.042 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:59.042 Verification LBA range: start 0x8000 length 0x8000 00:20:59.042 nvme0n3 : 5.74 150.27 9.39 0.00 0.00 781671.74 96014.19 1468848.63 00:20:59.042 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:59.042 Verification LBA range: start 0x0 length 0xbd0b 00:20:59.042 nvme1n1 : 5.70 193.65 12.10 0.00 0.00 597091.09 61482.77 788327.02 00:20:59.042 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:59.042 Verification LBA range: start 0xbd0b length 0xbd0b 00:20:59.042 nvme1n1 : 5.75 222.65 13.92 0.00 0.00 524443.35 8580.22 828754.04 00:20:59.042 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:59.042 Verification LBA range: start 0x0 length 0x2000 00:20:59.042 nvme2n1 : 5.70 168.33 10.52 0.00 0.00 677390.67 69905.07 923083.77 00:20:59.042 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:59.042 Verification LBA range: start 0x2000 length 0x2000 00:20:59.042 nvme2n1 : 5.74 149.02 9.31 0.00 0.00 761569.00 28635.81 1691197.28 00:20:59.042 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:59.042 Verification LBA range: start 0x0 length 0xa000 00:20:59.042 nvme3n1 : 5.71 179.18 11.20 0.00 0.00 624859.45 4395.39 801802.69 00:20:59.042 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:59.042 Verification LBA range: start 0xa000 length 0xa000 00:20:59.042 nvme3n1 : 5.76 205.59 12.85 0.00 0.00 539270.17 4658.58 815278.37 00:20:59.042 [2024-12-06T18:19:09.618Z] =================================================================================================================== 00:20:59.042 [2024-12-06T18:19:09.618Z] Total : 1998.60 124.91 0.00 0.00 706849.83 4395.39 2021351.33 00:21:00.422 00:21:00.422 real 0m8.163s 00:21:00.422 user 0m14.812s 00:21:00.422 sys 0m0.571s 00:21:00.422 18:19:10 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.422 
************************************ 00:21:00.422 END TEST bdev_verify_big_io 00:21:00.422 18:19:10 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:00.422 ************************************ 00:21:00.681 18:19:11 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:00.681 18:19:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:00.681 18:19:11 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.681 18:19:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:00.681 ************************************ 00:21:00.681 START TEST bdev_write_zeroes 00:21:00.681 ************************************ 00:21:00.681 18:19:11 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:00.681 [2024-12-06 18:19:11.144420] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:21:00.681 [2024-12-06 18:19:11.145174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74792 ] 00:21:00.940 [2024-12-06 18:19:11.331334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.940 [2024-12-06 18:19:11.445527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.506 Running I/O for 1 seconds... 
00:21:02.439 62144.00 IOPS, 242.75 MiB/s 00:21:02.439 Latency(us) 00:21:02.439 [2024-12-06T18:19:13.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.440 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:02.440 nvme0n1 : 1.02 10021.57 39.15 0.00 0.00 12759.62 7895.90 31373.06 00:21:02.440 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:02.440 nvme0n2 : 1.02 10006.98 39.09 0.00 0.00 12771.36 7948.54 31373.06 00:21:02.440 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:02.440 nvme0n3 : 1.02 9992.61 39.03 0.00 0.00 12780.40 7895.90 31373.06 00:21:02.440 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:02.440 nvme1n1 : 1.03 11478.05 44.84 0.00 0.00 11119.19 6106.17 28214.70 00:21:02.440 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:02.440 nvme2n1 : 1.03 9964.58 38.92 0.00 0.00 12726.30 4105.87 27583.02 00:21:02.440 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:02.440 nvme3n1 : 1.03 9954.73 38.89 0.00 0.00 12732.76 4263.79 26424.96 00:21:02.440 [2024-12-06T18:19:13.016Z] =================================================================================================================== 00:21:02.440 [2024-12-06T18:19:13.016Z] Total : 61418.52 239.92 0.00 0.00 12447.03 4105.87 31373.06 00:21:03.814 00:21:03.814 real 0m3.048s 00:21:03.814 user 0m2.225s 00:21:03.814 sys 0m0.621s 00:21:03.814 18:19:14 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:03.814 18:19:14 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:03.814 ************************************ 00:21:03.814 END TEST bdev_write_zeroes 00:21:03.814 ************************************ 00:21:03.814 18:19:14 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:03.814 18:19:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:03.814 18:19:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:03.814 18:19:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:03.814 ************************************ 00:21:03.814 START TEST bdev_json_nonenclosed 00:21:03.814 ************************************ 00:21:03.814 18:19:14 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:03.814 [2024-12-06 18:19:14.265011] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:21:03.814 [2024-12-06 18:19:14.265158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74851 ] 00:21:04.073 [2024-12-06 18:19:14.452480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.073 [2024-12-06 18:19:14.565830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.073 [2024-12-06 18:19:14.565931] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:04.073 [2024-12-06 18:19:14.565953] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:04.073 [2024-12-06 18:19:14.565965] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:04.332 00:21:04.332 real 0m0.666s 00:21:04.332 user 0m0.405s 00:21:04.332 sys 0m0.156s 00:21:04.332 18:19:14 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.332 ************************************ 00:21:04.332 END TEST bdev_json_nonenclosed 00:21:04.332 ************************************ 00:21:04.332 18:19:14 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:04.332 18:19:14 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:04.332 18:19:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:04.332 18:19:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.332 18:19:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:04.332 ************************************ 00:21:04.332 START TEST bdev_json_nonarray 00:21:04.332 ************************************ 00:21:04.332 18:19:14 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:04.590 [2024-12-06 18:19:15.005734] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:21:04.590 [2024-12-06 18:19:15.005873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74876 ] 00:21:04.849 [2024-12-06 18:19:15.196123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.849 [2024-12-06 18:19:15.304788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.849 [2024-12-06 18:19:15.304897] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
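Both JSON negative tests feed bdevperf a deliberately malformed config and pass only if the app bails out through spdk_app_stop with a non-zero code. The fixture files are not printed in the log; judging from the two error messages they look roughly like this (an assumption about the contents, not the literal files):

    # nonenclosed.json: a bare key/value pair, not enclosed in a top-level {...}
    #     "subsystems": []
    # nonarray.json: properly enclosed, but "subsystems" is not an array
    #     { "subsystems": {} }
    # Either way the run must fail for the test to pass:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json \
        -q 128 -o 4096 -w write_zeroes -t 1 \
        && echo "unexpected success" && exit 1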
00:21:04.849 [2024-12-06 18:19:15.304921] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:04.849 [2024-12-06 18:19:15.304933] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:05.107 00:21:05.107 real 0m0.653s 00:21:05.107 user 0m0.401s 00:21:05.107 sys 0m0.148s 00:21:05.107 18:19:15 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.107 18:19:15 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:05.107 ************************************ 00:21:05.107 END TEST bdev_json_nonarray 00:21:05.107 ************************************ 00:21:05.107 18:19:15 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:21:05.107 18:19:15 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:21:05.107 18:19:15 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:21:05.107 18:19:15 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:21:05.107 18:19:15 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:21:05.107 18:19:15 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:05.107 18:19:15 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:05.107 18:19:15 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:21:05.107 18:19:15 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:21:05.107 18:19:15 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:21:05.107 18:19:15 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:21:05.107 18:19:15 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:06.039 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:14.159 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:14.159 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:14.159 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:21:14.159 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:21:14.159 00:21:14.159 real 1m3.623s 00:21:14.159 user 1m36.003s 00:21:14.159 sys 0m37.278s 00:21:14.159 18:19:24 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:14.159 18:19:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:14.159 ************************************ 00:21:14.159 END TEST blockdev_xnvme 00:21:14.159 ************************************ 00:21:14.159 18:19:24 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:21:14.159 18:19:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:14.159 18:19:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:14.159 18:19:24 -- common/autotest_common.sh@10 -- # set +x 00:21:14.159 ************************************ 00:21:14.159 START TEST ublk 00:21:14.159 ************************************ 00:21:14.159 18:19:24 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:21:14.159 * Looking for test storage... 
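The 'nvme -> uio_pci_generic' lines just above come from scripts/setup.sh, which the cleanup step re-runs to unbind the NVMe controllers from the kernel nvme driver and hand them to a userspace-friendly stub driver before the next suite starts. Its usual entry points, for reference:

    sudo ./scripts/setup.sh            # bind test devices to uio_pci_generic / vfio-pci
    ./scripts/setup.sh status          # list devices and the driver each is bound to
    sudo ./scripts/setup.sh reset      # rebind everything to the kernel drivers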
00:21:14.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:21:14.159 18:19:24 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:14.159 18:19:24 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:21:14.159 18:19:24 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:14.422 18:19:24 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:14.422 18:19:24 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:14.422 18:19:24 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:14.422 18:19:24 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:14.422 18:19:24 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:21:14.422 18:19:24 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:21:14.422 18:19:24 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:21:14.422 18:19:24 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:21:14.422 18:19:24 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:21:14.422 18:19:24 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:21:14.422 18:19:24 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:21:14.422 18:19:24 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:14.422 18:19:24 ublk -- scripts/common.sh@344 -- # case "$op" in 00:21:14.422 18:19:24 ublk -- scripts/common.sh@345 -- # : 1 00:21:14.422 18:19:24 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:14.422 18:19:24 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:14.422 18:19:24 ublk -- scripts/common.sh@365 -- # decimal 1 00:21:14.422 18:19:24 ublk -- scripts/common.sh@353 -- # local d=1 00:21:14.422 18:19:24 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:14.422 18:19:24 ublk -- scripts/common.sh@355 -- # echo 1 00:21:14.422 18:19:24 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:21:14.422 18:19:24 ublk -- scripts/common.sh@366 -- # decimal 2 00:21:14.422 18:19:24 ublk -- scripts/common.sh@353 -- # local d=2 00:21:14.422 18:19:24 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:14.422 18:19:24 ublk -- scripts/common.sh@355 -- # echo 2 00:21:14.422 18:19:24 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:21:14.422 18:19:24 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:14.422 18:19:24 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:14.422 18:19:24 ublk -- scripts/common.sh@368 -- # return 0 00:21:14.422 18:19:24 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:14.422 18:19:24 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:14.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.422 --rc genhtml_branch_coverage=1 00:21:14.422 --rc genhtml_function_coverage=1 00:21:14.422 --rc genhtml_legend=1 00:21:14.422 --rc geninfo_all_blocks=1 00:21:14.422 --rc geninfo_unexecuted_blocks=1 00:21:14.422 00:21:14.422 ' 00:21:14.422 18:19:24 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:14.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.422 --rc genhtml_branch_coverage=1 00:21:14.422 --rc genhtml_function_coverage=1 00:21:14.422 --rc genhtml_legend=1 00:21:14.422 --rc geninfo_all_blocks=1 00:21:14.422 --rc geninfo_unexecuted_blocks=1 00:21:14.422 00:21:14.422 ' 00:21:14.422 18:19:24 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:14.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.422 --rc genhtml_branch_coverage=1 00:21:14.422 --rc 
genhtml_function_coverage=1 00:21:14.422 --rc genhtml_legend=1 00:21:14.422 --rc geninfo_all_blocks=1 00:21:14.422 --rc geninfo_unexecuted_blocks=1 00:21:14.422 00:21:14.422 ' 00:21:14.422 18:19:24 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:14.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.422 --rc genhtml_branch_coverage=1 00:21:14.422 --rc genhtml_function_coverage=1 00:21:14.422 --rc genhtml_legend=1 00:21:14.422 --rc geninfo_all_blocks=1 00:21:14.422 --rc geninfo_unexecuted_blocks=1 00:21:14.422 00:21:14.422 ' 00:21:14.422 18:19:24 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:21:14.422 18:19:24 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:21:14.422 18:19:24 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:21:14.423 18:19:24 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:21:14.423 18:19:24 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:21:14.423 18:19:24 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:21:14.423 18:19:24 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:21:14.423 18:19:24 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:21:14.423 18:19:24 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:21:14.423 18:19:24 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:21:14.423 18:19:24 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:21:14.423 18:19:24 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:21:14.423 18:19:24 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:21:14.423 18:19:24 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:21:14.423 18:19:24 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:21:14.423 18:19:24 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:21:14.423 18:19:24 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:21:14.423 18:19:24 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:21:14.423 18:19:24 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:21:14.423 18:19:24 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:21:14.423 18:19:24 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:14.423 18:19:24 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:14.423 18:19:24 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:14.423 ************************************ 00:21:14.423 START TEST test_save_ublk_config 00:21:14.423 ************************************ 00:21:14.423 18:19:24 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:21:14.423 18:19:24 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:21:14.423 18:19:24 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75187 00:21:14.423 18:19:24 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:21:14.423 18:19:24 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:21:14.423 18:19:24 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75187 00:21:14.423 18:19:24 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75187 ']' 00:21:14.423 18:19:24 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.423 18:19:24 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
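test_save_config, starting here, is a save/restore round trip: bring up spdk_tgt with ublk tracing, create the ublk target plus a malloc-backed ublk disk, capture the live configuration with save_config, and finally boot a second spdk_tgt purely from that captured JSON via process substitution (the -c /dev/fd/63 seen further down). In outline, with rpc.py flag spellings that may be abbreviated differently in this SPDK revision:

    build/bin/spdk_tgt -L ublk &
    scripts/rpc.py ublk_create_target --cpumask 1
    scripts/rpc.py bdev_malloc_create -b malloc0 32 4096          # 8192 blocks x 4 KiB
    scripts/rpc.py ublk_start_disk malloc0 0 --num-queues 1 --queue-depth 128
    config=$(scripts/rpc.py save_config)                          # dump the running state
    # ...exercise /dev/ublkb0, then stop the first target...
    build/bin/spdk_tgt -L ublk -c <(echo "$config")               # second target, same state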
00:21:14.423 18:19:24 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.423 18:19:24 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.423 18:19:24 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:14.423 [2024-12-06 18:19:24.922439] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:21:14.423 [2024-12-06 18:19:24.923135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75187 ] 00:21:14.693 [2024-12-06 18:19:25.093286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.693 [2024-12-06 18:19:25.200302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.628 18:19:26 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.628 18:19:26 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:21:15.628 18:19:26 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:21:15.628 18:19:26 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:21:15.628 18:19:26 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.628 18:19:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:15.628 [2024-12-06 18:19:26.108311] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:15.628 [2024-12-06 18:19:26.109381] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:15.628 malloc0 00:21:15.628 [2024-12-06 18:19:26.194417] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:21:15.628 [2024-12-06 18:19:26.194507] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:21:15.628 [2024-12-06 18:19:26.194520] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:15.629 [2024-12-06 18:19:26.194529] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:15.629 [2024-12-06 18:19:26.202398] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:15.629 [2024-12-06 18:19:26.202421] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:15.887 [2024-12-06 18:19:26.209299] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:15.887 [2024-12-06 18:19:26.209401] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:15.887 [2024-12-06 18:19:26.226291] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:15.887 0 00:21:15.887 18:19:26 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.887 18:19:26 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:21:15.887 18:19:26 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.887 18:19:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:16.146 18:19:26 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.146 18:19:26 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:21:16.146 
"subsystems": [ 00:21:16.146 { 00:21:16.146 "subsystem": "fsdev", 00:21:16.146 "config": [ 00:21:16.146 { 00:21:16.146 "method": "fsdev_set_opts", 00:21:16.146 "params": { 00:21:16.146 "fsdev_io_pool_size": 65535, 00:21:16.146 "fsdev_io_cache_size": 256 00:21:16.146 } 00:21:16.146 } 00:21:16.146 ] 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "subsystem": "keyring", 00:21:16.146 "config": [] 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "subsystem": "iobuf", 00:21:16.146 "config": [ 00:21:16.146 { 00:21:16.146 "method": "iobuf_set_options", 00:21:16.146 "params": { 00:21:16.146 "small_pool_count": 8192, 00:21:16.146 "large_pool_count": 1024, 00:21:16.146 "small_bufsize": 8192, 00:21:16.146 "large_bufsize": 135168, 00:21:16.146 "enable_numa": false 00:21:16.146 } 00:21:16.146 } 00:21:16.146 ] 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "subsystem": "sock", 00:21:16.146 "config": [ 00:21:16.146 { 00:21:16.146 "method": "sock_set_default_impl", 00:21:16.146 "params": { 00:21:16.146 "impl_name": "posix" 00:21:16.146 } 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "method": "sock_impl_set_options", 00:21:16.146 "params": { 00:21:16.146 "impl_name": "ssl", 00:21:16.146 "recv_buf_size": 4096, 00:21:16.146 "send_buf_size": 4096, 00:21:16.146 "enable_recv_pipe": true, 00:21:16.146 "enable_quickack": false, 00:21:16.146 "enable_placement_id": 0, 00:21:16.146 "enable_zerocopy_send_server": true, 00:21:16.146 "enable_zerocopy_send_client": false, 00:21:16.146 "zerocopy_threshold": 0, 00:21:16.146 "tls_version": 0, 00:21:16.146 "enable_ktls": false 00:21:16.146 } 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "method": "sock_impl_set_options", 00:21:16.146 "params": { 00:21:16.146 "impl_name": "posix", 00:21:16.146 "recv_buf_size": 2097152, 00:21:16.146 "send_buf_size": 2097152, 00:21:16.146 "enable_recv_pipe": true, 00:21:16.146 "enable_quickack": false, 00:21:16.146 "enable_placement_id": 0, 00:21:16.146 "enable_zerocopy_send_server": true, 00:21:16.146 "enable_zerocopy_send_client": false, 00:21:16.146 "zerocopy_threshold": 0, 00:21:16.146 "tls_version": 0, 00:21:16.146 "enable_ktls": false 00:21:16.146 } 00:21:16.146 } 00:21:16.146 ] 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "subsystem": "vmd", 00:21:16.146 "config": [] 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "subsystem": "accel", 00:21:16.146 "config": [ 00:21:16.146 { 00:21:16.146 "method": "accel_set_options", 00:21:16.146 "params": { 00:21:16.146 "small_cache_size": 128, 00:21:16.146 "large_cache_size": 16, 00:21:16.146 "task_count": 2048, 00:21:16.146 "sequence_count": 2048, 00:21:16.146 "buf_count": 2048 00:21:16.146 } 00:21:16.146 } 00:21:16.146 ] 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "subsystem": "bdev", 00:21:16.146 "config": [ 00:21:16.146 { 00:21:16.146 "method": "bdev_set_options", 00:21:16.146 "params": { 00:21:16.146 "bdev_io_pool_size": 65535, 00:21:16.146 "bdev_io_cache_size": 256, 00:21:16.146 "bdev_auto_examine": true, 00:21:16.146 "iobuf_small_cache_size": 128, 00:21:16.146 "iobuf_large_cache_size": 16 00:21:16.146 } 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "method": "bdev_raid_set_options", 00:21:16.146 "params": { 00:21:16.146 "process_window_size_kb": 1024, 00:21:16.146 "process_max_bandwidth_mb_sec": 0 00:21:16.146 } 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "method": "bdev_iscsi_set_options", 00:21:16.146 "params": { 00:21:16.146 "timeout_sec": 30 00:21:16.146 } 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "method": "bdev_nvme_set_options", 00:21:16.146 "params": { 00:21:16.146 "action_on_timeout": "none", 
00:21:16.146 "timeout_us": 0, 00:21:16.146 "timeout_admin_us": 0, 00:21:16.146 "keep_alive_timeout_ms": 10000, 00:21:16.146 "arbitration_burst": 0, 00:21:16.146 "low_priority_weight": 0, 00:21:16.146 "medium_priority_weight": 0, 00:21:16.146 "high_priority_weight": 0, 00:21:16.146 "nvme_adminq_poll_period_us": 10000, 00:21:16.146 "nvme_ioq_poll_period_us": 0, 00:21:16.146 "io_queue_requests": 0, 00:21:16.146 "delay_cmd_submit": true, 00:21:16.146 "transport_retry_count": 4, 00:21:16.146 "bdev_retry_count": 3, 00:21:16.146 "transport_ack_timeout": 0, 00:21:16.146 "ctrlr_loss_timeout_sec": 0, 00:21:16.146 "reconnect_delay_sec": 0, 00:21:16.146 "fast_io_fail_timeout_sec": 0, 00:21:16.146 "disable_auto_failback": false, 00:21:16.146 "generate_uuids": false, 00:21:16.146 "transport_tos": 0, 00:21:16.146 "nvme_error_stat": false, 00:21:16.146 "rdma_srq_size": 0, 00:21:16.146 "io_path_stat": false, 00:21:16.146 "allow_accel_sequence": false, 00:21:16.146 "rdma_max_cq_size": 0, 00:21:16.146 "rdma_cm_event_timeout_ms": 0, 00:21:16.146 "dhchap_digests": [ 00:21:16.146 "sha256", 00:21:16.146 "sha384", 00:21:16.146 "sha512" 00:21:16.146 ], 00:21:16.146 "dhchap_dhgroups": [ 00:21:16.146 "null", 00:21:16.146 "ffdhe2048", 00:21:16.146 "ffdhe3072", 00:21:16.146 "ffdhe4096", 00:21:16.146 "ffdhe6144", 00:21:16.146 "ffdhe8192" 00:21:16.146 ], 00:21:16.146 "rdma_umr_per_io": false 00:21:16.146 } 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "method": "bdev_nvme_set_hotplug", 00:21:16.146 "params": { 00:21:16.146 "period_us": 100000, 00:21:16.146 "enable": false 00:21:16.146 } 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "method": "bdev_malloc_create", 00:21:16.146 "params": { 00:21:16.146 "name": "malloc0", 00:21:16.146 "num_blocks": 8192, 00:21:16.146 "block_size": 4096, 00:21:16.146 "physical_block_size": 4096, 00:21:16.146 "uuid": "e19fe1a7-2274-4f6c-8168-875efb932fca", 00:21:16.146 "optimal_io_boundary": 0, 00:21:16.146 "md_size": 0, 00:21:16.146 "dif_type": 0, 00:21:16.146 "dif_is_head_of_md": false, 00:21:16.146 "dif_pi_format": 0 00:21:16.146 } 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "method": "bdev_wait_for_examine" 00:21:16.146 } 00:21:16.146 ] 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "subsystem": "scsi", 00:21:16.146 "config": null 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "subsystem": "scheduler", 00:21:16.146 "config": [ 00:21:16.146 { 00:21:16.146 "method": "framework_set_scheduler", 00:21:16.146 "params": { 00:21:16.146 "name": "static" 00:21:16.146 } 00:21:16.146 } 00:21:16.146 ] 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "subsystem": "vhost_scsi", 00:21:16.146 "config": [] 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "subsystem": "vhost_blk", 00:21:16.146 "config": [] 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "subsystem": "ublk", 00:21:16.146 "config": [ 00:21:16.146 { 00:21:16.146 "method": "ublk_create_target", 00:21:16.146 "params": { 00:21:16.146 "cpumask": "1" 00:21:16.146 } 00:21:16.146 }, 00:21:16.146 { 00:21:16.146 "method": "ublk_start_disk", 00:21:16.146 "params": { 00:21:16.146 "bdev_name": "malloc0", 00:21:16.146 "ublk_id": 0, 00:21:16.147 "num_queues": 1, 00:21:16.147 "queue_depth": 128 00:21:16.147 } 00:21:16.147 } 00:21:16.147 ] 00:21:16.147 }, 00:21:16.147 { 00:21:16.147 "subsystem": "nbd", 00:21:16.147 "config": [] 00:21:16.147 }, 00:21:16.147 { 00:21:16.147 "subsystem": "nvmf", 00:21:16.147 "config": [ 00:21:16.147 { 00:21:16.147 "method": "nvmf_set_config", 00:21:16.147 "params": { 00:21:16.147 "discovery_filter": "match_any", 00:21:16.147 "admin_cmd_passthru": { 
00:21:16.147 "identify_ctrlr": false 00:21:16.147 }, 00:21:16.147 "dhchap_digests": [ 00:21:16.147 "sha256", 00:21:16.147 "sha384", 00:21:16.147 "sha512" 00:21:16.147 ], 00:21:16.147 "dhchap_dhgroups": [ 00:21:16.147 "null", 00:21:16.147 "ffdhe2048", 00:21:16.147 "ffdhe3072", 00:21:16.147 "ffdhe4096", 00:21:16.147 "ffdhe6144", 00:21:16.147 "ffdhe8192" 00:21:16.147 ] 00:21:16.147 } 00:21:16.147 }, 00:21:16.147 { 00:21:16.147 "method": "nvmf_set_max_subsystems", 00:21:16.147 "params": { 00:21:16.147 "max_subsystems": 1024 00:21:16.147 } 00:21:16.147 }, 00:21:16.147 { 00:21:16.147 "method": "nvmf_set_crdt", 00:21:16.147 "params": { 00:21:16.147 "crdt1": 0, 00:21:16.147 "crdt2": 0, 00:21:16.147 "crdt3": 0 00:21:16.147 } 00:21:16.147 } 00:21:16.147 ] 00:21:16.147 }, 00:21:16.147 { 00:21:16.147 "subsystem": "iscsi", 00:21:16.147 "config": [ 00:21:16.147 { 00:21:16.147 "method": "iscsi_set_options", 00:21:16.147 "params": { 00:21:16.147 "node_base": "iqn.2016-06.io.spdk", 00:21:16.147 "max_sessions": 128, 00:21:16.147 "max_connections_per_session": 2, 00:21:16.147 "max_queue_depth": 64, 00:21:16.147 "default_time2wait": 2, 00:21:16.147 "default_time2retain": 20, 00:21:16.147 "first_burst_length": 8192, 00:21:16.147 "immediate_data": true, 00:21:16.147 "allow_duplicated_isid": false, 00:21:16.147 "error_recovery_level": 0, 00:21:16.147 "nop_timeout": 60, 00:21:16.147 "nop_in_interval": 30, 00:21:16.147 "disable_chap": false, 00:21:16.147 "require_chap": false, 00:21:16.147 "mutual_chap": false, 00:21:16.147 "chap_group": 0, 00:21:16.147 "max_large_datain_per_connection": 64, 00:21:16.147 "max_r2t_per_connection": 4, 00:21:16.147 "pdu_pool_size": 36864, 00:21:16.147 "immediate_data_pool_size": 16384, 00:21:16.147 "data_out_pool_size": 2048 00:21:16.147 } 00:21:16.147 } 00:21:16.147 ] 00:21:16.147 } 00:21:16.147 ] 00:21:16.147 }' 00:21:16.147 18:19:26 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75187 00:21:16.147 18:19:26 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75187 ']' 00:21:16.147 18:19:26 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75187 00:21:16.147 18:19:26 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:21:16.147 18:19:26 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.147 18:19:26 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75187 00:21:16.147 18:19:26 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:16.147 18:19:26 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:16.147 killing process with pid 75187 00:21:16.147 18:19:26 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75187' 00:21:16.147 18:19:26 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75187 00:21:16.147 18:19:26 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75187 00:21:17.525 [2024-12-06 18:19:28.022482] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:17.525 [2024-12-06 18:19:28.060370] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:17.525 [2024-12-06 18:19:28.060493] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:17.525 [2024-12-06 18:19:28.068321] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 
00:21:17.525 [2024-12-06 18:19:28.068372] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:17.525 [2024-12-06 18:19:28.068386] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:17.525 [2024-12-06 18:19:28.068410] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:17.525 [2024-12-06 18:19:28.068553] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:19.451 18:19:29 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75255 00:21:19.451 18:19:29 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75255 00:21:19.451 18:19:29 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75255 ']' 00:21:19.451 18:19:29 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.451 18:19:29 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.451 18:19:29 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.451 18:19:29 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:21:19.451 18:19:29 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.451 18:19:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:19.451 18:19:29 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:21:19.451 "subsystems": [ 00:21:19.451 { 00:21:19.451 "subsystem": "fsdev", 00:21:19.451 "config": [ 00:21:19.451 { 00:21:19.451 "method": "fsdev_set_opts", 00:21:19.451 "params": { 00:21:19.451 "fsdev_io_pool_size": 65535, 00:21:19.451 "fsdev_io_cache_size": 256 00:21:19.451 } 00:21:19.451 } 00:21:19.451 ] 00:21:19.451 }, 00:21:19.451 { 00:21:19.451 "subsystem": "keyring", 00:21:19.451 "config": [] 00:21:19.451 }, 00:21:19.451 { 00:21:19.451 "subsystem": "iobuf", 00:21:19.451 "config": [ 00:21:19.451 { 00:21:19.451 "method": "iobuf_set_options", 00:21:19.451 "params": { 00:21:19.451 "small_pool_count": 8192, 00:21:19.451 "large_pool_count": 1024, 00:21:19.451 "small_bufsize": 8192, 00:21:19.451 "large_bufsize": 135168, 00:21:19.451 "enable_numa": false 00:21:19.451 } 00:21:19.451 } 00:21:19.451 ] 00:21:19.451 }, 00:21:19.451 { 00:21:19.451 "subsystem": "sock", 00:21:19.451 "config": [ 00:21:19.451 { 00:21:19.451 "method": "sock_set_default_impl", 00:21:19.451 "params": { 00:21:19.451 "impl_name": "posix" 00:21:19.451 } 00:21:19.451 }, 00:21:19.451 { 00:21:19.451 "method": "sock_impl_set_options", 00:21:19.451 "params": { 00:21:19.451 "impl_name": "ssl", 00:21:19.451 "recv_buf_size": 4096, 00:21:19.451 "send_buf_size": 4096, 00:21:19.451 "enable_recv_pipe": true, 00:21:19.451 "enable_quickack": false, 00:21:19.451 "enable_placement_id": 0, 00:21:19.451 "enable_zerocopy_send_server": true, 00:21:19.451 "enable_zerocopy_send_client": false, 00:21:19.451 "zerocopy_threshold": 0, 00:21:19.451 "tls_version": 0, 00:21:19.451 "enable_ktls": false 00:21:19.451 } 00:21:19.451 }, 00:21:19.451 { 00:21:19.451 "method": "sock_impl_set_options", 00:21:19.451 "params": { 00:21:19.451 "impl_name": "posix", 00:21:19.451 "recv_buf_size": 2097152, 00:21:19.451 "send_buf_size": 2097152, 00:21:19.451 "enable_recv_pipe": true, 00:21:19.451 "enable_quickack": false, 00:21:19.451 "enable_placement_id": 0, 00:21:19.451 
"enable_zerocopy_send_server": true, 00:21:19.451 "enable_zerocopy_send_client": false, 00:21:19.451 "zerocopy_threshold": 0, 00:21:19.451 "tls_version": 0, 00:21:19.451 "enable_ktls": false 00:21:19.451 } 00:21:19.451 } 00:21:19.451 ] 00:21:19.451 }, 00:21:19.451 { 00:21:19.451 "subsystem": "vmd", 00:21:19.451 "config": [] 00:21:19.451 }, 00:21:19.451 { 00:21:19.451 "subsystem": "accel", 00:21:19.451 "config": [ 00:21:19.451 { 00:21:19.451 "method": "accel_set_options", 00:21:19.451 "params": { 00:21:19.451 "small_cache_size": 128, 00:21:19.452 "large_cache_size": 16, 00:21:19.452 "task_count": 2048, 00:21:19.452 "sequence_count": 2048, 00:21:19.452 "buf_count": 2048 00:21:19.452 } 00:21:19.452 } 00:21:19.452 ] 00:21:19.452 }, 00:21:19.452 { 00:21:19.452 "subsystem": "bdev", 00:21:19.452 "config": [ 00:21:19.452 { 00:21:19.452 "method": "bdev_set_options", 00:21:19.452 "params": { 00:21:19.452 "bdev_io_pool_size": 65535, 00:21:19.452 "bdev_io_cache_size": 256, 00:21:19.452 "bdev_auto_examine": true, 00:21:19.452 "iobuf_small_cache_size": 128, 00:21:19.452 "iobuf_large_cache_size": 16 00:21:19.452 } 00:21:19.452 }, 00:21:19.452 { 00:21:19.452 "method": "bdev_raid_set_options", 00:21:19.452 "params": { 00:21:19.452 "process_window_size_kb": 1024, 00:21:19.452 "process_max_bandwidth_mb_sec": 0 00:21:19.452 } 00:21:19.452 }, 00:21:19.452 { 00:21:19.452 "method": "bdev_iscsi_set_options", 00:21:19.452 "params": { 00:21:19.452 "timeout_sec": 30 00:21:19.452 } 00:21:19.452 }, 00:21:19.452 { 00:21:19.452 "method": "bdev_nvme_set_options", 00:21:19.452 "params": { 00:21:19.452 "action_on_timeout": "none", 00:21:19.452 "timeout_us": 0, 00:21:19.452 "timeout_admin_us": 0, 00:21:19.452 "keep_alive_timeout_ms": 10000, 00:21:19.452 "arbitration_burst": 0, 00:21:19.452 "low_priority_weight": 0, 00:21:19.452 "medium_priority_weight": 0, 00:21:19.452 "high_priority_weight": 0, 00:21:19.452 "nvme_adminq_poll_period_us": 10000, 00:21:19.452 "nvme_ioq_poll_period_us": 0, 00:21:19.452 "io_queue_requests": 0, 00:21:19.452 "delay_cmd_submit": true, 00:21:19.452 "transport_retry_count": 4, 00:21:19.452 "bdev_retry_count": 3, 00:21:19.452 "transport_ack_timeout": 0, 00:21:19.452 "ctrlr_loss_timeout_sec": 0, 00:21:19.452 "reconnect_delay_sec": 0, 00:21:19.452 "fast_io_fail_timeout_sec": 0, 00:21:19.452 "disable_auto_failback": false, 00:21:19.452 "generate_uuids": false, 00:21:19.452 "transport_tos": 0, 00:21:19.452 "nvme_error_stat": false, 00:21:19.452 "rdma_srq_size": 0, 00:21:19.452 "io_path_stat": false, 00:21:19.452 "allow_accel_sequence": false, 00:21:19.452 "rdma_max_cq_size": 0, 00:21:19.452 "rdma_cm_event_timeout_ms": 0, 00:21:19.452 "dhchap_digests": [ 00:21:19.452 "sha256", 00:21:19.452 "sha384", 00:21:19.452 "sha512" 00:21:19.452 ], 00:21:19.452 "dhchap_dhgroups": [ 00:21:19.452 "null", 00:21:19.452 "ffdhe2048", 00:21:19.452 "ffdhe3072", 00:21:19.452 "ffdhe4096", 00:21:19.452 "ffdhe6144", 00:21:19.452 "ffdhe8192" 00:21:19.452 ], 00:21:19.452 "rdma_umr_per_io": false 00:21:19.452 } 00:21:19.452 }, 00:21:19.452 { 00:21:19.452 "method": "bdev_nvme_set_hotplug", 00:21:19.452 "params": { 00:21:19.452 "period_us": 100000, 00:21:19.452 "enable": false 00:21:19.452 } 00:21:19.452 }, 00:21:19.452 { 00:21:19.452 "method": "bdev_malloc_create", 00:21:19.452 "params": { 00:21:19.452 "name": "malloc0", 00:21:19.452 "num_blocks": 8192, 00:21:19.452 "block_size": 4096, 00:21:19.452 "physical_block_size": 4096, 00:21:19.452 "uuid": "e19fe1a7-2274-4f6c-8168-875efb932fca", 00:21:19.452 "optimal_io_boundary": 0, 
00:21:19.452 "md_size": 0, 00:21:19.452 "dif_type": 0, 00:21:19.452 "dif_is_head_of_md": false, 00:21:19.452 "dif_pi_format": 0 00:21:19.452 } 00:21:19.452 }, 00:21:19.452 { 00:21:19.452 "method": "bdev_wait_for_examine" 00:21:19.452 } 00:21:19.452 ] 00:21:19.452 }, 00:21:19.452 { 00:21:19.452 "subsystem": "scsi", 00:21:19.452 "config": null 00:21:19.452 }, 00:21:19.452 { 00:21:19.452 "subsystem": "scheduler", 00:21:19.452 "config": [ 00:21:19.452 { 00:21:19.452 "method": "framework_set_scheduler", 00:21:19.452 "params": { 00:21:19.452 "name": "static" 00:21:19.452 } 00:21:19.452 } 00:21:19.452 ] 00:21:19.452 }, 00:21:19.452 { 00:21:19.452 "subsystem": "vhost_scsi", 00:21:19.452 "config": [] 00:21:19.452 }, 00:21:19.452 { 00:21:19.452 "subsystem": "vhost_blk", 00:21:19.452 "config": [] 00:21:19.452 }, 00:21:19.452 { 00:21:19.452 "subsystem": "ublk", 00:21:19.452 "config": [ 00:21:19.452 { 00:21:19.452 "method": "ublk_create_target", 00:21:19.452 "params": { 00:21:19.452 "cpumask": "1" 00:21:19.452 } 00:21:19.452 }, 00:21:19.452 { 00:21:19.452 "method": "ublk_start_disk", 00:21:19.452 "params": { 00:21:19.452 "bdev_name": "malloc0", 00:21:19.452 "ublk_id": 0, 00:21:19.452 "num_queues": 1, 00:21:19.452 "queue_depth": 128 00:21:19.452 } 00:21:19.452 } 00:21:19.452 ] 00:21:19.452 }, 00:21:19.452 { 00:21:19.452 "subsystem": "nbd", 00:21:19.452 "config": [] 00:21:19.452 }, 00:21:19.452 { 00:21:19.452 "subsystem": "nvmf", 00:21:19.452 "config": [ 00:21:19.452 { 00:21:19.452 "method": "nvmf_set_config", 00:21:19.452 "params": { 00:21:19.452 "discovery_filter": "match_any", 00:21:19.452 "admin_cmd_passthru": { 00:21:19.452 "identify_ctrlr": false 00:21:19.452 }, 00:21:19.452 "dhchap_digests": [ 00:21:19.452 "sha256", 00:21:19.452 "sha384", 00:21:19.452 "sha512" 00:21:19.452 ], 00:21:19.452 "dhchap_dhgroups": [ 00:21:19.452 "null", 00:21:19.452 "ffdhe2048", 00:21:19.452 "ffdhe3072", 00:21:19.452 "ffdhe4096", 00:21:19.452 "ffdhe6144", 00:21:19.452 "ffdhe8192" 00:21:19.452 ] 00:21:19.452 } 00:21:19.452 }, 00:21:19.452 { 00:21:19.452 "method": "nvmf_set_max_subsystems", 00:21:19.452 "params": { 00:21:19.452 "max_subsystems": 1024 00:21:19.452 } 00:21:19.452 }, 00:21:19.452 { 00:21:19.452 "method": "nvmf_set_crdt", 00:21:19.452 "params": { 00:21:19.452 "crdt1": 0, 00:21:19.452 "crdt2": 0, 00:21:19.452 "crdt3": 0 00:21:19.452 } 00:21:19.452 } 00:21:19.452 ] 00:21:19.452 }, 00:21:19.452 { 00:21:19.452 "subsystem": "iscsi", 00:21:19.452 "config": [ 00:21:19.452 { 00:21:19.452 "method": "iscsi_set_options", 00:21:19.452 "params": { 00:21:19.452 "node_base": "iqn.2016-06.io.spdk", 00:21:19.452 "max_sessions": 128, 00:21:19.452 "max_connections_per_session": 2, 00:21:19.452 "max_queue_depth": 64, 00:21:19.452 "default_time2wait": 2, 00:21:19.452 "default_time2retain": 20, 00:21:19.452 "first_burst_length": 8192, 00:21:19.452 "immediate_data": true, 00:21:19.452 "allow_duplicated_isid": false, 00:21:19.452 "error_recovery_level": 0, 00:21:19.452 "nop_timeout": 60, 00:21:19.452 "nop_in_interval": 30, 00:21:19.452 "disable_chap": false, 00:21:19.452 "require_chap": false, 00:21:19.452 "mutual_chap": false, 00:21:19.452 "chap_group": 0, 00:21:19.452 "max_large_datain_per_connection": 64, 00:21:19.452 "max_r2t_per_connection": 4, 00:21:19.452 "pdu_pool_size": 36864, 00:21:19.452 "immediate_data_pool_size": 16384, 00:21:19.452 "data_out_pool_size": 2048 00:21:19.452 } 00:21:19.452 } 00:21:19.452 ] 00:21:19.452 } 00:21:19.452 ] 00:21:19.452 }' 00:21:19.711 [2024-12-06 18:19:30.039681] Starting SPDK v25.01-pre git 
sha1 1148849d6 / DPDK 24.03.0 initialization... 00:21:19.711 [2024-12-06 18:19:30.040337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75255 ] 00:21:19.711 [2024-12-06 18:19:30.219657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.969 [2024-12-06 18:19:30.328633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.915 [2024-12-06 18:19:31.356284] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:20.915 [2024-12-06 18:19:31.357547] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:20.915 [2024-12-06 18:19:31.364407] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:21:20.915 [2024-12-06 18:19:31.364495] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:21:20.915 [2024-12-06 18:19:31.364507] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:20.915 [2024-12-06 18:19:31.364515] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:20.915 [2024-12-06 18:19:31.373346] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:20.915 [2024-12-06 18:19:31.373368] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:20.915 [2024-12-06 18:19:31.380293] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:20.915 [2024-12-06 18:19:31.380392] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:20.915 [2024-12-06 18:19:31.397288] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:20.915 18:19:31 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.915 18:19:31 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:21:20.916 18:19:31 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:21:20.916 18:19:31 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.916 18:19:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:20.916 18:19:31 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:21:20.916 18:19:31 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.174 18:19:31 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:21.174 18:19:31 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:21:21.174 18:19:31 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75255 00:21:21.174 18:19:31 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75255 ']' 00:21:21.174 18:19:31 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75255 00:21:21.174 18:19:31 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:21:21.174 18:19:31 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.174 18:19:31 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75255 00:21:21.174 18:19:31 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.174 
18:19:31 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.174 killing process with pid 75255 00:21:21.174 18:19:31 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75255' 00:21:21.174 18:19:31 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75255 00:21:21.174 18:19:31 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75255 00:21:22.548 [2024-12-06 18:19:33.077703] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:22.548 [2024-12-06 18:19:33.111309] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:22.549 [2024-12-06 18:19:33.111430] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:22.549 [2024-12-06 18:19:33.119301] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:22.549 [2024-12-06 18:19:33.119352] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:22.549 [2024-12-06 18:19:33.119360] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:22.549 [2024-12-06 18:19:33.119383] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:22.549 [2024-12-06 18:19:33.119521] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:24.455 18:19:34 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:21:24.455 00:21:24.455 real 0m10.182s 00:21:24.455 user 0m7.927s 00:21:24.455 sys 0m3.069s 00:21:24.455 18:19:34 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.455 18:19:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:24.455 ************************************ 00:21:24.455 END TEST test_save_ublk_config 00:21:24.455 ************************************ 00:21:24.715 18:19:35 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75346 00:21:24.715 18:19:35 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:21:24.715 18:19:35 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:24.715 18:19:35 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75346 00:21:24.715 18:19:35 ublk -- common/autotest_common.sh@835 -- # '[' -z 75346 ']' 00:21:24.715 18:19:35 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.715 18:19:35 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.715 18:19:35 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.715 18:19:35 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.715 18:19:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:24.715 [2024-12-06 18:19:35.140095] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
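With test_save_ublk_config finished, the suite relaunches the target with a two-core mask (-m 0x3) and waitforlisten blocks until the RPC socket answers, which is why the two "Reactor started" notices below precede any RPC traffic. A rough stand-in for that launch-and-wait step, assuming the same build tree (the until-loop is a simplification of autotest_common.sh's waitforlisten, not its actual body):

  ./build/bin/spdk_tgt -m 0x3 -L ublk &
  tgt_pid=$!
  # rpc_get_methods is a cheap query that only succeeds once the app has
  # finished starting up and is listening on /var/tmp/spdk.sock.
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done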
00:21:24.715 [2024-12-06 18:19:35.140219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75346 ] 00:21:24.974 [2024-12-06 18:19:35.324543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:24.974 [2024-12-06 18:19:35.440645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.974 [2024-12-06 18:19:35.440680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.909 18:19:36 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.909 18:19:36 ublk -- common/autotest_common.sh@868 -- # return 0 00:21:25.909 18:19:36 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:21:25.909 18:19:36 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:25.909 18:19:36 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.909 18:19:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:25.909 ************************************ 00:21:25.909 START TEST test_create_ublk 00:21:25.909 ************************************ 00:21:25.909 18:19:36 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:21:25.909 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:21:25.909 18:19:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.909 18:19:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:25.909 [2024-12-06 18:19:36.364287] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:25.909 [2024-12-06 18:19:36.366721] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:25.909 18:19:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.909 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:21:25.909 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:21:25.909 18:19:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.909 18:19:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:26.169 18:19:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.169 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:21:26.169 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:21:26.169 18:19:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.169 18:19:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:26.169 [2024-12-06 18:19:36.688437] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:21:26.169 [2024-12-06 18:19:36.688871] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:21:26.169 [2024-12-06 18:19:36.688892] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:26.169 [2024-12-06 18:19:36.688901] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:26.169 [2024-12-06 18:19:36.696308] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:26.169 [2024-12-06 18:19:36.696330] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:26.169 
[2024-12-06 18:19:36.704300] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:26.169 [2024-12-06 18:19:36.704898] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:26.169 [2024-12-06 18:19:36.735301] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:26.170 18:19:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.429 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:21:26.429 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:21:26.429 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:21:26.429 18:19:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.429 18:19:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:26.429 18:19:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.429 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:21:26.429 { 00:21:26.429 "ublk_device": "/dev/ublkb0", 00:21:26.429 "id": 0, 00:21:26.429 "queue_depth": 512, 00:21:26.429 "num_queues": 4, 00:21:26.429 "bdev_name": "Malloc0" 00:21:26.429 } 00:21:26.429 ]' 00:21:26.429 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:21:26.429 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:26.429 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:21:26.429 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:21:26.429 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:21:26.429 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:21:26.429 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:21:26.429 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:21:26.429 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:21:26.429 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:21:26.429 18:19:36 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:21:26.429 18:19:36 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:21:26.429 18:19:36 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:21:26.429 18:19:36 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:21:26.429 18:19:36 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:21:26.429 18:19:36 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:21:26.429 18:19:36 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:21:26.429 18:19:36 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:21:26.429 18:19:36 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:21:26.429 18:19:36 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:21:26.429 18:19:36 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
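Everything test_create_ublk has done so far condenses to four commands: create the ublk target, back it with a 128 MiB malloc bdev, expose that bdev as /dev/ublkb0 with 4 queues of depth 512, then exercise the block device with the verifying fio write job whose output follows. As plain shell (a sketch; the names, sizes and fio arguments are the ones visible in the trace above):

  ./scripts/rpc.py ublk_create_target
  ./scripts/rpc.py bdev_malloc_create 128 4096            # returns Malloc0
  ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512  # -> /dev/ublkb0
  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 \
      --verify=pattern --verify_pattern=0xcc --verify_state_save=0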
00:21:26.429 18:19:36 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:21:26.689 fio: verification read phase will never start because write phase uses all of runtime 00:21:26.689 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:21:26.689 fio-3.35 00:21:26.689 Starting 1 process 00:21:36.680 00:21:36.680 fio_test: (groupid=0, jobs=1): err= 0: pid=75394: Fri Dec 6 18:19:47 2024 00:21:36.680 write: IOPS=14.8k, BW=57.6MiB/s (60.4MB/s)(576MiB/10001msec); 0 zone resets 00:21:36.680 clat (usec): min=37, max=4596, avg=66.92, stdev=100.84 00:21:36.680 lat (usec): min=38, max=4596, avg=67.41, stdev=100.85 00:21:36.680 clat percentiles (usec): 00:21:36.680 | 1.00th=[ 40], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 54], 00:21:36.680 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 59], 00:21:36.680 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 69], 95.00th=[ 145], 00:21:36.680 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 2008], 99.95th=[ 2802], 00:21:36.680 | 99.99th=[ 3556] 00:21:36.680 bw ( KiB/s): min=23784, max=76351, per=100.00%, avg=59199.95, stdev=14179.93, samples=19 00:21:36.680 iops : min= 5946, max=19087, avg=14799.95, stdev=3544.93, samples=19 00:21:36.680 lat (usec) : 50=4.23%, 100=90.25%, 250=5.32%, 500=0.02%, 750=0.01% 00:21:36.680 lat (usec) : 1000=0.02% 00:21:36.680 lat (msec) : 2=0.06%, 4=0.10%, 10=0.01% 00:21:36.680 cpu : usr=3.02%, sys=9.74%, ctx=147570, majf=0, minf=798 00:21:36.680 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:36.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.680 issued rwts: total=0,147570,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.680 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:36.680 00:21:36.680 Run status group 0 (all jobs): 00:21:36.680 WRITE: bw=57.6MiB/s (60.4MB/s), 57.6MiB/s-57.6MiB/s (60.4MB/s-60.4MB/s), io=576MiB (604MB), run=10001-10001msec 00:21:36.680 00:21:36.680 Disk stats (read/write): 00:21:36.680 ublkb0: ios=0/146142, merge=0/0, ticks=0/8696, in_queue=8696, util=99.12% 00:21:36.680 18:19:47 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:21:36.680 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.680 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:36.680 [2024-12-06 18:19:47.236832] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:36.939 [2024-12-06 18:19:47.291318] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:36.939 [2024-12-06 18:19:47.292164] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:36.939 [2024-12-06 18:19:47.300358] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:36.939 [2024-12-06 18:19:47.300670] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:36.939 [2024-12-06 18:19:47.300685] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.939 18:19:47 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:36.939 [2024-12-06 18:19:47.323396] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:21:36.939 request: 00:21:36.939 { 00:21:36.939 "ublk_id": 0, 00:21:36.939 "method": "ublk_stop_disk", 00:21:36.939 "req_id": 1 00:21:36.939 } 00:21:36.939 Got JSON-RPC error response 00:21:36.939 response: 00:21:36.939 { 00:21:36.939 "code": -19, 00:21:36.939 "message": "No such device" 00:21:36.939 } 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:36.939 18:19:47 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:36.939 [2024-12-06 18:19:47.339424] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:36.939 [2024-12-06 18:19:47.347285] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:36.939 [2024-12-06 18:19:47.347349] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.939 18:19:47 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.939 18:19:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:37.874 18:19:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.874 18:19:48 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:21:37.874 18:19:48 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:21:37.874 18:19:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.874 18:19:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:37.874 18:19:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.874 18:19:48 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:21:37.874 18:19:48 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:21:37.874 18:19:48 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:21:37.874 18:19:48 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:21:37.874 18:19:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.874 18:19:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:37.874 18:19:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.874 18:19:48 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:21:37.874 18:19:48 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:21:37.874 18:19:48 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:21:37.874 00:21:37.874 real 0m11.846s 00:21:37.874 user 0m0.695s 00:21:37.874 sys 0m1.100s 00:21:37.874 18:19:48 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:37.874 18:19:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:37.874 ************************************ 00:21:37.874 END TEST test_create_ublk 00:21:37.874 ************************************ 00:21:37.874 18:19:48 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:21:37.874 18:19:48 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:37.874 18:19:48 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.874 18:19:48 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:37.874 ************************************ 00:21:37.874 START TEST test_create_multi_ublk 00:21:37.874 ************************************ 00:21:37.874 18:19:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:21:37.874 18:19:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:21:37.874 18:19:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.874 18:19:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:37.874 [2024-12-06 18:19:48.281290] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:37.874 [2024-12-06 18:19:48.284027] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:37.874 18:19:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.874 18:19:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:21:37.874 18:19:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:21:37.874 18:19:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:37.874 18:19:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:21:37.874 18:19:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.874 18:19:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.133 18:19:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.133 18:19:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:21:38.133 18:19:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:21:38.133 18:19:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.133 18:19:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.133 [2024-12-06 18:19:48.559466] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:21:38.133 [2024-12-06 18:19:48.559922] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:21:38.133 [2024-12-06 18:19:48.559935] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:38.133 [2024-12-06 18:19:48.559949] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:38.133 [2024-12-06 18:19:48.567320] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:38.133 [2024-12-06 18:19:48.567355] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:38.133 [2024-12-06 18:19:48.575322] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:38.133 [2024-12-06 18:19:48.576018] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:38.133 [2024-12-06 18:19:48.605298] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:38.133 18:19:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.133 18:19:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:21:38.133 18:19:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:38.133 18:19:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:21:38.133 18:19:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.133 18:19:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.390 18:19:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.390 18:19:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:21:38.390 18:19:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:21:38.390 18:19:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.390 18:19:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.390 [2024-12-06 18:19:48.927456] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:21:38.390 [2024-12-06 18:19:48.927919] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:21:38.390 [2024-12-06 18:19:48.927937] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:21:38.390 [2024-12-06 18:19:48.927946] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:21:38.390 [2024-12-06 18:19:48.935818] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:38.390 [2024-12-06 18:19:48.935853] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:38.390 [2024-12-06 18:19:48.946313] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:38.390 [2024-12-06 18:19:48.946998] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:21:38.390 [2024-12-06 18:19:48.959299] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:21:38.647 18:19:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.647 18:19:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:21:38.647 18:19:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:38.647 
18:19:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:21:38.647 18:19:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.647 18:19:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.905 18:19:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.905 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:21:38.905 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:21:38.905 18:19:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.905 18:19:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.905 [2024-12-06 18:19:49.270459] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:21:38.905 [2024-12-06 18:19:49.270916] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:21:38.905 [2024-12-06 18:19:49.270934] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:21:38.905 [2024-12-06 18:19:49.270945] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:21:38.905 [2024-12-06 18:19:49.278335] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:38.905 [2024-12-06 18:19:49.278366] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:38.905 [2024-12-06 18:19:49.286305] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:38.905 [2024-12-06 18:19:49.286924] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:21:38.905 [2024-12-06 18:19:49.289892] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:21:38.905 18:19:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.905 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:21:38.905 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:38.905 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:21:38.905 18:19:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.905 18:19:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:39.163 [2024-12-06 18:19:49.573467] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:21:39.163 [2024-12-06 18:19:49.573920] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:21:39.163 [2024-12-06 18:19:49.573936] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:21:39.163 [2024-12-06 18:19:49.573944] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:21:39.163 
[2024-12-06 18:19:49.581339] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:39.163 [2024-12-06 18:19:49.581366] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:39.163 [2024-12-06 18:19:49.589328] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:39.163 [2024-12-06 18:19:49.589986] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:21:39.163 [2024-12-06 18:19:49.593996] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:21:39.163 { 00:21:39.163 "ublk_device": "/dev/ublkb0", 00:21:39.163 "id": 0, 00:21:39.163 "queue_depth": 512, 00:21:39.163 "num_queues": 4, 00:21:39.163 "bdev_name": "Malloc0" 00:21:39.163 }, 00:21:39.163 { 00:21:39.163 "ublk_device": "/dev/ublkb1", 00:21:39.163 "id": 1, 00:21:39.163 "queue_depth": 512, 00:21:39.163 "num_queues": 4, 00:21:39.163 "bdev_name": "Malloc1" 00:21:39.163 }, 00:21:39.163 { 00:21:39.163 "ublk_device": "/dev/ublkb2", 00:21:39.163 "id": 2, 00:21:39.163 "queue_depth": 512, 00:21:39.163 "num_queues": 4, 00:21:39.163 "bdev_name": "Malloc2" 00:21:39.163 }, 00:21:39.163 { 00:21:39.163 "ublk_device": "/dev/ublkb3", 00:21:39.163 "id": 3, 00:21:39.163 "queue_depth": 512, 00:21:39.163 "num_queues": 4, 00:21:39.163 "bdev_name": "Malloc3" 00:21:39.163 } 00:21:39.163 ]' 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:21:39.163 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:21:39.422 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:39.422 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:21:39.422 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:39.422 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:21:39.422 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:21:39.422 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:39.422 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:21:39.422 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:21:39.422 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:21:39.422 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:21:39.422 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:21:39.422 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:39.422 18:19:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:21:39.681 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:39.681 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:21:39.681 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:21:39.681 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:39.681 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:21:39.681 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:21:39.681 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:21:39.681 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:21:39.681 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:21:39.681 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:39.681 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:21:39.681 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:39.681 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:21:39.939 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:21:39.939 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:39.939 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:21:39.939 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:21:39.939 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:21:39.939 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:21:39.939 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:21:39.939 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:39.939 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:21:39.939 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:39.939 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:40.198 [2024-12-06 18:19:50.530443] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:40.198 [2024-12-06 18:19:50.569351] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:40.198 [2024-12-06 18:19:50.570280] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:40.198 [2024-12-06 18:19:50.579384] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:40.198 [2024-12-06 18:19:50.579704] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:40.198 [2024-12-06 18:19:50.579719] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:40.198 [2024-12-06 18:19:50.594447] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:21:40.198 [2024-12-06 18:19:50.626776] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:40.198 [2024-12-06 18:19:50.627808] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:21:40.198 [2024-12-06 18:19:50.637317] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:40.198 [2024-12-06 18:19:50.637618] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:21:40.198 [2024-12-06 18:19:50.637638] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:40.198 [2024-12-06 18:19:50.653429] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:21:40.198 [2024-12-06 18:19:50.686778] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:40.198 [2024-12-06 18:19:50.687769] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:21:40.198 [2024-12-06 18:19:50.693326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:40.198 [2024-12-06 18:19:50.693630] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:21:40.198 [2024-12-06 18:19:50.693649] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.198 18:19:50 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:21:40.198 [2024-12-06 18:19:50.704432] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:21:40.198 [2024-12-06 18:19:50.741334] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:40.198 [2024-12-06 18:19:50.742089] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:21:40.199 [2024-12-06 18:19:50.744557] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:40.199 [2024-12-06 18:19:50.744830] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:21:40.199 [2024-12-06 18:19:50.744843] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:21:40.199 18:19:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.199 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:21:40.457 [2024-12-06 18:19:50.946431] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:40.457 [2024-12-06 18:19:50.953304] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:40.457 [2024-12-06 18:19:50.953373] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:40.457 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:21:40.457 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:40.457 18:19:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:40.457 18:19:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.457 18:19:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:41.393 18:19:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.393 18:19:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:41.393 18:19:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:41.393 18:19:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.393 18:19:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:41.652 18:19:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.652 18:19:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:41.652 18:19:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:21:41.652 18:19:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.652 18:19:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:41.911 18:19:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.911 18:19:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:41.911 18:19:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:21:41.911 18:19:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.911 18:19:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:42.481 18:19:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.481 18:19:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:21:42.481 18:19:52 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:21:42.481 18:19:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.481 18:19:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:42.481 18:19:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.481 18:19:52 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:21:42.481 18:19:52 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:21:42.481 18:19:52 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:21:42.481 18:19:52 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:21:42.481 18:19:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.481 18:19:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:42.481 18:19:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.481 18:19:52 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:21:42.481 18:19:52 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:21:42.481 ************************************ 00:21:42.481 END TEST test_create_multi_ublk 00:21:42.481 ************************************ 00:21:42.481 18:19:52 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:21:42.481 00:21:42.481 real 0m4.684s 00:21:42.481 user 0m1.053s 00:21:42.481 sys 0m0.241s 00:21:42.481 18:19:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:42.481 18:19:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:42.481 18:19:53 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:42.481 18:19:53 ublk -- ublk/ublk.sh@147 -- # cleanup 00:21:42.481 18:19:53 ublk -- ublk/ublk.sh@130 -- # killprocess 75346 00:21:42.481 18:19:53 ublk -- common/autotest_common.sh@954 -- # '[' -z 75346 ']' 00:21:42.481 18:19:53 ublk -- common/autotest_common.sh@958 -- # kill -0 75346 00:21:42.481 18:19:53 ublk -- common/autotest_common.sh@959 -- # uname 00:21:42.481 18:19:53 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.481 18:19:53 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75346 00:21:42.481 killing process with pid 75346 00:21:42.481 18:19:53 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:42.481 18:19:53 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:42.481 18:19:53 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75346' 00:21:42.481 18:19:53 ublk -- common/autotest_common.sh@973 -- # kill 75346 00:21:42.481 18:19:53 ublk -- common/autotest_common.sh@978 -- # wait 75346 00:21:43.860 [2024-12-06 18:19:54.210747] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:43.860 [2024-12-06 18:19:54.210805] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:45.283 00:21:45.283 real 0m30.931s 00:21:45.283 user 0m44.751s 00:21:45.283 sys 0m10.139s 00:21:45.283 18:19:55 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:45.283 ************************************ 00:21:45.283 END TEST ublk 00:21:45.283 ************************************ 00:21:45.283 18:19:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:45.283 18:19:55 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:21:45.283 
18:19:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:45.283 18:19:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:45.283 18:19:55 -- common/autotest_common.sh@10 -- # set +x 00:21:45.283 ************************************ 00:21:45.283 START TEST ublk_recovery 00:21:45.283 ************************************ 00:21:45.283 18:19:55 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:21:45.283 * Looking for test storage... 00:21:45.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:21:45.283 18:19:55 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:45.283 18:19:55 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:21:45.283 18:19:55 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:45.283 18:19:55 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:45.283 18:19:55 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:21:45.283 18:19:55 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:45.283 18:19:55 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:45.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.283 --rc genhtml_branch_coverage=1 00:21:45.283 --rc genhtml_function_coverage=1 00:21:45.283 --rc genhtml_legend=1 00:21:45.283 --rc geninfo_all_blocks=1 00:21:45.283 --rc geninfo_unexecuted_blocks=1 00:21:45.283 00:21:45.283 ' 00:21:45.283 18:19:55 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:45.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.283 --rc genhtml_branch_coverage=1 00:21:45.284 --rc genhtml_function_coverage=1 00:21:45.284 --rc genhtml_legend=1 00:21:45.284 --rc geninfo_all_blocks=1 00:21:45.284 --rc geninfo_unexecuted_blocks=1 00:21:45.284 00:21:45.284 ' 00:21:45.284 18:19:55 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:45.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.284 --rc genhtml_branch_coverage=1 00:21:45.284 --rc genhtml_function_coverage=1 00:21:45.284 --rc genhtml_legend=1 00:21:45.284 --rc geninfo_all_blocks=1 00:21:45.284 --rc geninfo_unexecuted_blocks=1 00:21:45.284 00:21:45.284 ' 00:21:45.284 18:19:55 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:45.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.284 --rc genhtml_branch_coverage=1 00:21:45.284 --rc genhtml_function_coverage=1 00:21:45.284 --rc genhtml_legend=1 00:21:45.284 --rc geninfo_all_blocks=1 00:21:45.284 --rc geninfo_unexecuted_blocks=1 00:21:45.284 00:21:45.284 ' 00:21:45.284 18:19:55 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:21:45.284 18:19:55 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:21:45.284 18:19:55 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:21:45.284 18:19:55 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:21:45.284 18:19:55 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:21:45.284 18:19:55 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:21:45.284 18:19:55 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:21:45.284 18:19:55 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:21:45.284 18:19:55 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:21:45.284 18:19:55 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:21:45.284 18:19:55 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75774 00:21:45.284 18:19:55 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:21:45.284 18:19:55 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:45.284 18:19:55 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75774 00:21:45.284 18:19:55 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75774 ']' 00:21:45.284 18:19:55 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.284 18:19:55 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.284 18:19:55 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.284 18:19:55 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.284 18:19:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.542 [2024-12-06 18:19:55.925395] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:21:45.543 [2024-12-06 18:19:55.925530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75774 ] 00:21:45.543 [2024-12-06 18:19:56.099985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:45.801 [2024-12-06 18:19:56.217817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.801 [2024-12-06 18:19:56.217855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.736 18:19:57 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.736 18:19:57 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:21:46.736 18:19:57 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:21:46.736 18:19:57 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.736 18:19:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.736 [2024-12-06 18:19:57.101288] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:46.736 [2024-12-06 18:19:57.103973] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:46.736 18:19:57 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.736 18:19:57 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:21:46.736 18:19:57 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.736 18:19:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.736 malloc0 00:21:46.736 18:19:57 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.736 18:19:57 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:21:46.736 18:19:57 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.737 18:19:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.737 [2024-12-06 18:19:57.253463] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:21:46.737 [2024-12-06 18:19:57.253589] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:21:46.737 [2024-12-06 18:19:57.253604] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:21:46.737 [2024-12-06 18:19:57.253614] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:21:46.737 [2024-12-06 18:19:57.262415] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:46.737 [2024-12-06 18:19:57.262452] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:46.737 [2024-12-06 18:19:57.269310] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:46.737 [2024-12-06 18:19:57.269475] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:21:46.737 [2024-12-06 18:19:57.284316] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:21:46.737 1 00:21:46.737 18:19:57 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.737 18:19:57 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:21:48.117 18:19:58 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75809 00:21:48.117 18:19:58 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:21:48.117 18:19:58 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:21:48.117 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:48.117 fio-3.35 00:21:48.117 Starting 1 process 00:21:53.434 18:20:03 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75774 00:21:53.434 18:20:03 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:21:58.703 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75774 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:21:58.703 18:20:08 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=75922 00:21:58.703 18:20:08 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:21:58.703 18:20:08 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:58.703 18:20:08 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 75922 00:21:58.703 18:20:08 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75922 ']' 00:21:58.703 18:20:08 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.703 18:20:08 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.703 18:20:08 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.703 18:20:08 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.703 18:20:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.703 [2024-12-06 18:20:08.422739] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
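At this point ublk_recovery.sh has finished its setup phase: a fresh spdk_tgt was started with ublk debug logging, a 64 MiB malloc bdev was exposed to the kernel as /dev/ublkb1, fio was launched against it, and five seconds in the target process was killed with SIGKILL; the trace above ends with the replacement spdk_tgt (pid 75922) starting up. Condensed into plain shell (a sketch of the sequence the trace executes, not the verbatim script):

    # Phase 1: bring up a ublk disk, then kill the SPDK target under a running fio.
    build/bin/spdk_tgt -m 0x3 -L ublk &                    # reactors on cores 0-1
    spdk_pid=$!
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096   # 64 MiB bdev, 4 KiB blocks
    scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128   # /dev/ublkb1, 2 queues, qd 128
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    fio_pid=$!
    sleep 5
    kill -9 "$spdk_pid"                                    # simulate a target crash mid-IO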
00:21:58.703 [2024-12-06 18:20:08.422866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75922 ] 00:21:58.703 [2024-12-06 18:20:08.607339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:58.703 [2024-12-06 18:20:08.729038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.703 [2024-12-06 18:20:08.729075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.269 18:20:09 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.269 18:20:09 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:21:59.269 18:20:09 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:21:59.269 18:20:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.269 18:20:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.269 [2024-12-06 18:20:09.637287] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:59.269 [2024-12-06 18:20:09.640089] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:59.269 18:20:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.269 18:20:09 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:21:59.269 18:20:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.269 18:20:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.269 malloc0 00:21:59.269 18:20:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.269 18:20:09 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:21:59.269 18:20:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.269 18:20:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.269 [2024-12-06 18:20:09.789449] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:21:59.269 [2024-12-06 18:20:09.789499] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:21:59.269 [2024-12-06 18:20:09.789511] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:21:59.269 [2024-12-06 18:20:09.797336] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:21:59.269 [2024-12-06 18:20:09.797365] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:21:59.269 [2024-12-06 18:20:09.797388] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:21:59.269 [2024-12-06 18:20:09.797482] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:21:59.269 1 00:21:59.269 18:20:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.269 18:20:09 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75809 00:21:59.269 [2024-12-06 18:20:09.805301] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:21:59.269 [2024-12-06 18:20:09.811781] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:21:59.269 [2024-12-06 18:20:09.819508] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:21:59.269 [2024-12-06 
18:20:09.819538] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:22:55.501 00:22:55.501 fio_test: (groupid=0, jobs=1): err= 0: pid=75818: Fri Dec 6 18:20:58 2024 00:22:55.501 read: IOPS=21.7k, BW=84.7MiB/s (88.8MB/s)(5081MiB/60002msec) 00:22:55.501 slat (nsec): min=1891, max=1272.3k, avg=7339.90, stdev=2662.47 00:22:55.501 clat (usec): min=963, max=6525.7k, avg=2910.28, stdev=46108.93 00:22:55.501 lat (usec): min=968, max=6525.7k, avg=2917.62, stdev=46108.94 00:22:55.501 clat percentiles (usec): 00:22:55.501 | 1.00th=[ 1975], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2311], 00:22:55.501 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:22:55.501 | 70.00th=[ 2474], 80.00th=[ 2606], 90.00th=[ 3163], 95.00th=[ 3818], 00:22:55.501 | 99.00th=[ 5080], 99.50th=[ 5604], 99.90th=[ 6456], 99.95th=[ 7242], 00:22:55.501 | 99.99th=[12780] 00:22:55.501 bw ( KiB/s): min=21848, max=104184, per=100.00%, avg=96406.19, stdev=11183.22, samples=107 00:22:55.501 iops : min= 5462, max=26046, avg=24101.53, stdev=2795.80, samples=107 00:22:55.501 write: IOPS=21.7k, BW=84.6MiB/s (88.7MB/s)(5075MiB/60002msec); 0 zone resets 00:22:55.501 slat (usec): min=2, max=1022, avg= 7.38, stdev= 2.85 00:22:55.501 clat (usec): min=825, max=6526.3k, avg=2982.01, stdev=45421.90 00:22:55.501 lat (usec): min=831, max=6526.3k, avg=2989.39, stdev=45421.91 00:22:55.501 clat percentiles (usec): 00:22:55.501 | 1.00th=[ 1991], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2409], 00:22:55.501 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:22:55.501 | 70.00th=[ 2606], 80.00th=[ 2704], 90.00th=[ 3261], 95.00th=[ 3785], 00:22:55.501 | 99.00th=[ 5080], 99.50th=[ 5669], 99.90th=[ 6587], 99.95th=[ 7308], 00:22:55.501 | 99.99th=[12911] 00:22:55.501 bw ( KiB/s): min=22720, max=104272, per=100.00%, avg=96306.91, stdev=11057.48, samples=107 00:22:55.501 iops : min= 5680, max=26068, avg=24076.68, stdev=2764.38, samples=107 00:22:55.501 lat (usec) : 1000=0.01% 00:22:55.501 lat (msec) : 2=1.14%, 4=94.68%, 10=4.16%, 20=0.01%, >=2000=0.01% 00:22:55.501 cpu : usr=12.02%, sys=32.07%, ctx=111145, majf=0, minf=13 00:22:55.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:22:55.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:55.502 issued rwts: total=1300637,1299103,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.502 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:55.502 00:22:55.502 Run status group 0 (all jobs): 00:22:55.502 READ: bw=84.7MiB/s (88.8MB/s), 84.7MiB/s-84.7MiB/s (88.8MB/s-88.8MB/s), io=5081MiB (5327MB), run=60002-60002msec 00:22:55.502 WRITE: bw=84.6MiB/s (88.7MB/s), 84.6MiB/s-84.6MiB/s (88.7MB/s-88.7MB/s), io=5075MiB (5321MB), run=60002-60002msec 00:22:55.502 00:22:55.502 Disk stats (read/write): 00:22:55.502 ublkb1: ios=1297797/1296319, merge=0/0, ticks=3666966/3623266, in_queue=7290232, util=99.95% 00:22:55.502 18:20:58 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:22:55.502 18:20:58 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.502 18:20:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.502 [2024-12-06 18:20:58.576416] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:22:55.502 [2024-12-06 18:20:58.618288] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 
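Recovery itself is driven purely over RPC against the replacement target: the kernel keeps /dev/ublkb1 alive while user space is gone, and ublk_recover_disk rebinds ublk device 1 (reported as device state 2 in the trace) to the recreated bdev via the GET_DEV_INFO, START_USER_RECOVERY and END_USER_RECOVERY control commands. In outline (a sketch of the flow in ublk_recovery.sh; names as in the log):

    # Phase 2: reattach the surviving kernel ublk device to the new process.
    build/bin/spdk_tgt -m 0x3 -L ublk &                    # fresh process, new pid
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096   # same bdev name as before
    scripts/rpc.py ublk_recover_disk malloc0 1             # rebind /dev/ublkb1 to malloc0
    wait "$fio_pid"                                        # fio rides out the gap

The fio summary above is internally consistent: 5081 MiB read over 60.002 s is the reported 84.7 MiB/s, and 88.8 MB/s divided by 4096-byte IOs gives roughly 21.7k IOPS. The 0.01% of completions in the >=2000 msec bucket, with a clat maximum near 6.5 s, is consistent with IOs stalled between the kill at 18:20:03 and recovery completing at 18:20:09, rather than with steady-state latency.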
00:22:55.502 [2024-12-06 18:20:58.618508] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:22:55.502 [2024-12-06 18:20:58.626314] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:55.502 [2024-12-06 18:20:58.626594] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:22:55.502 [2024-12-06 18:20:58.626686] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:22:55.502 18:20:58 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.502 18:20:58 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:22:55.502 18:20:58 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.502 18:20:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.502 [2024-12-06 18:20:58.641416] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:55.502 [2024-12-06 18:20:58.649283] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:55.502 [2024-12-06 18:20:58.649319] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:55.502 18:20:58 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.502 18:20:58 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:22:55.502 18:20:58 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:22:55.502 18:20:58 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 75922 00:22:55.502 18:20:58 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 75922 ']' 00:22:55.502 18:20:58 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 75922 00:22:55.502 18:20:58 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:22:55.502 18:20:58 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:55.502 18:20:58 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75922 00:22:55.502 killing process with pid 75922 00:22:55.502 18:20:58 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:55.502 18:20:58 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:55.502 18:20:58 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75922' 00:22:55.502 18:20:58 ublk_recovery -- common/autotest_common.sh@973 -- # kill 75922 00:22:55.502 18:20:58 ublk_recovery -- common/autotest_common.sh@978 -- # wait 75922 00:22:55.502 [2024-12-06 18:21:00.304273] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:55.502 [2024-12-06 18:21:00.304474] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:55.502 00:22:55.502 real 1m6.147s 00:22:55.502 user 1m49.081s 00:22:55.502 sys 0m38.826s 00:22:55.502 18:21:01 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:55.502 ************************************ 00:22:55.502 END TEST ublk_recovery 00:22:55.502 ************************************ 00:22:55.502 18:21:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.502 18:21:01 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:22:55.502 18:21:01 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:22:55.502 18:21:01 -- spdk/autotest.sh@260 -- # timing_exit lib 00:22:55.502 18:21:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:55.502 18:21:01 -- common/autotest_common.sh@10 -- # set +x 00:22:55.502 18:21:01 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:22:55.502 18:21:01 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:22:55.502 18:21:01 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 
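The run of '[' 0 -eq 1 ']' checks here is spdk/autotest.sh walking its suite gates: each test group is wrapped in a numeric flag test, and only gates whose flag is 1 (like the ftl gate at autotest.sh line 342 immediately below) emit a run_test line. Schematically (the flag name is illustrative of the SPDK_TEST_* convention; the actual variable at each line is not shown in the trace):

    if [ "$SPDK_TEST_FTL" -eq 1 ]; then
        run_test ftl "$rootdir/test/ftl/ftl.sh"
    fi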
00:22:55.502 18:21:01 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:55.502 18:21:01 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:55.502 18:21:01 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:22:55.502 18:21:01 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:22:55.502 18:21:01 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:22:55.502 18:21:01 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:55.502 18:21:01 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:22:55.502 18:21:01 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:22:55.502 18:21:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:55.502 18:21:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:55.502 18:21:01 -- common/autotest_common.sh@10 -- # set +x 00:22:55.502 ************************************ 00:22:55.502 START TEST ftl 00:22:55.502 ************************************ 00:22:55.502 18:21:01 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:22:55.502 * Looking for test storage... 00:22:55.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:55.502 18:21:01 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:55.502 18:21:01 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:22:55.502 18:21:01 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:55.502 18:21:02 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:55.502 18:21:02 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:55.502 18:21:02 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:55.502 18:21:02 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:55.502 18:21:02 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:22:55.502 18:21:02 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:22:55.502 18:21:02 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:22:55.502 18:21:02 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:22:55.502 18:21:02 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:22:55.502 18:21:02 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:22:55.502 18:21:02 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:22:55.502 18:21:02 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:55.502 18:21:02 ftl -- scripts/common.sh@344 -- # case "$op" in 00:22:55.502 18:21:02 ftl -- scripts/common.sh@345 -- # : 1 00:22:55.502 18:21:02 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:55.502 18:21:02 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:55.502 18:21:02 ftl -- scripts/common.sh@365 -- # decimal 1 00:22:55.502 18:21:02 ftl -- scripts/common.sh@353 -- # local d=1 00:22:55.502 18:21:02 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:55.502 18:21:02 ftl -- scripts/common.sh@355 -- # echo 1 00:22:55.502 18:21:02 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:22:55.502 18:21:02 ftl -- scripts/common.sh@366 -- # decimal 2 00:22:55.502 18:21:02 ftl -- scripts/common.sh@353 -- # local d=2 00:22:55.502 18:21:02 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:55.502 18:21:02 ftl -- scripts/common.sh@355 -- # echo 2 00:22:55.502 18:21:02 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:22:55.502 18:21:02 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:55.502 18:21:02 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:55.502 18:21:02 ftl -- scripts/common.sh@368 -- # return 0 00:22:55.502 18:21:02 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:55.502 18:21:02 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:55.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.502 --rc genhtml_branch_coverage=1 00:22:55.502 --rc genhtml_function_coverage=1 00:22:55.502 --rc genhtml_legend=1 00:22:55.502 --rc geninfo_all_blocks=1 00:22:55.502 --rc geninfo_unexecuted_blocks=1 00:22:55.502 00:22:55.502 ' 00:22:55.502 18:21:02 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:55.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.502 --rc genhtml_branch_coverage=1 00:22:55.502 --rc genhtml_function_coverage=1 00:22:55.502 --rc genhtml_legend=1 00:22:55.502 --rc geninfo_all_blocks=1 00:22:55.502 --rc geninfo_unexecuted_blocks=1 00:22:55.502 00:22:55.502 ' 00:22:55.502 18:21:02 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:55.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.502 --rc genhtml_branch_coverage=1 00:22:55.502 --rc genhtml_function_coverage=1 00:22:55.502 --rc genhtml_legend=1 00:22:55.502 --rc geninfo_all_blocks=1 00:22:55.502 --rc geninfo_unexecuted_blocks=1 00:22:55.502 00:22:55.502 ' 00:22:55.502 18:21:02 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:55.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.502 --rc genhtml_branch_coverage=1 00:22:55.502 --rc genhtml_function_coverage=1 00:22:55.502 --rc genhtml_legend=1 00:22:55.502 --rc geninfo_all_blocks=1 00:22:55.502 --rc geninfo_unexecuted_blocks=1 00:22:55.502 00:22:55.502 ' 00:22:55.502 18:21:02 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:55.502 18:21:02 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:22:55.502 18:21:02 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:55.502 18:21:02 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:55.502 18:21:02 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
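The lt 1.15 2 block above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x: both version strings are split on '.', '-' and ':' (IFS=.-:), each component is validated against ^[0-9]+$, and the first unequal component decides. A self-contained sketch of the same component-wise comparison (numeric components only; the real cmp_versions also handles other operators):

    # Returns 0 (true) when version $1 sorts strictly before version $2.
    version_lt() {
        local IFS=.-:                  # same separators common.sh splits on
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields default to 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                       # equal is not strictly less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"       # first field decides: 1 < 2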
00:22:55.502 18:21:02 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:55.502 18:21:02 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:55.502 18:21:02 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:55.502 18:21:02 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:55.502 18:21:02 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:55.502 18:21:02 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:55.502 18:21:02 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:55.502 18:21:02 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:55.502 18:21:02 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:55.502 18:21:02 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:55.503 18:21:02 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:55.503 18:21:02 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:55.503 18:21:02 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:55.503 18:21:02 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:55.503 18:21:02 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:55.503 18:21:02 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:55.503 18:21:02 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:55.503 18:21:02 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:55.503 18:21:02 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:55.503 18:21:02 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:55.503 18:21:02 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:55.503 18:21:02 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:55.503 18:21:02 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:55.503 18:21:02 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:55.503 18:21:02 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:55.503 18:21:02 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:22:55.503 18:21:02 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:22:55.503 18:21:02 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:22:55.503 18:21:02 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:22:55.503 18:21:02 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:55.503 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:55.503 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:55.503 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:55.503 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:55.503 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:55.503 18:21:02 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:22:55.503 18:21:02 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76729 00:22:55.503 18:21:02 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76729 00:22:55.503 18:21:02 ftl -- common/autotest_common.sh@835 -- # '[' -z 76729 ']' 00:22:55.503 18:21:02 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.503 18:21:02 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.503 18:21:02 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.503 18:21:02 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.503 18:21:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:55.503 [2024-12-06 18:21:03.049537] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:22:55.503 [2024-12-06 18:21:03.049663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76729 ] 00:22:55.503 [2024-12-06 18:21:03.232243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.503 [2024-12-06 18:21:03.342865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.503 18:21:03 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.503 18:21:03 ftl -- common/autotest_common.sh@868 -- # return 0 00:22:55.503 18:21:03 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:22:55.503 18:21:04 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:22:55.503 18:21:05 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:22:55.503 18:21:05 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:55.503 18:21:05 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:22:55.503 18:21:05 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:22:55.503 18:21:05 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:22:55.503 18:21:05 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:22:55.503 18:21:05 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:22:55.503 18:21:05 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:22:55.503 18:21:05 ftl -- ftl/ftl.sh@50 -- # break 00:22:55.503 18:21:05 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:22:55.503 18:21:05 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:22:55.503 18:21:05 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:22:55.503 18:21:05 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:22:55.503 18:21:05 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:22:55.503 18:21:05 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:22:55.503 18:21:05 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:22:55.503 18:21:05 ftl -- ftl/ftl.sh@63 -- # break 00:22:55.503 18:21:05 ftl -- ftl/ftl.sh@66 -- # killprocess 76729 00:22:55.503 18:21:05 ftl -- common/autotest_common.sh@954 -- # '[' -z 76729 ']' 00:22:55.503 18:21:05 ftl -- common/autotest_common.sh@958 -- # kill -0 76729 00:22:55.503 18:21:05 ftl -- common/autotest_common.sh@959 -- # uname 00:22:55.503 18:21:05 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:55.503 18:21:05 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76729 00:22:55.503 killing process with pid 76729 00:22:55.503 18:21:05 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:55.503 18:21:05 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:55.503 18:21:05 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76729' 00:22:55.503 18:21:05 ftl -- common/autotest_common.sh@973 -- # kill 76729 00:22:55.503 18:21:05 ftl -- common/autotest_common.sh@978 -- # wait 76729 00:22:58.034 18:21:08 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:22:58.034 18:21:08 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:22:58.034 18:21:08 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:58.034 18:21:08 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:58.034 18:21:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:58.034 ************************************ 00:22:58.034 START TEST ftl_fio_basic 00:22:58.034 ************************************ 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:22:58.034 * Looking for test storage... 00:22:58.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:58.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.034 --rc genhtml_branch_coverage=1 00:22:58.034 --rc genhtml_function_coverage=1 00:22:58.034 --rc genhtml_legend=1 00:22:58.034 --rc geninfo_all_blocks=1 00:22:58.034 --rc geninfo_unexecuted_blocks=1 00:22:58.034 00:22:58.034 ' 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:58.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.034 --rc genhtml_branch_coverage=1 00:22:58.034 --rc genhtml_function_coverage=1 00:22:58.034 --rc genhtml_legend=1 00:22:58.034 --rc geninfo_all_blocks=1 00:22:58.034 --rc geninfo_unexecuted_blocks=1 00:22:58.034 00:22:58.034 ' 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:58.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.034 --rc genhtml_branch_coverage=1 00:22:58.034 --rc genhtml_function_coverage=1 00:22:58.034 --rc genhtml_legend=1 00:22:58.034 --rc geninfo_all_blocks=1 00:22:58.034 --rc geninfo_unexecuted_blocks=1 00:22:58.034 00:22:58.034 ' 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:58.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.034 --rc genhtml_branch_coverage=1 00:22:58.034 --rc genhtml_function_coverage=1 00:22:58.034 --rc genhtml_legend=1 00:22:58.034 --rc geninfo_all_blocks=1 00:22:58.034 --rc geninfo_unexecuted_blocks=1 00:22:58.034 00:22:58.034 ' 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:58.034 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
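Back in the ftl.sh block above, the cache and base disks were not hard-coded: ftl.sh asks bdev_get_bdevs for every attached namespace and filters with jq, requiring 64-byte metadata (md_size==64) for the nv-cache candidate and, for the base disk, any other non-zoned namespace of at least 1310720 blocks. The two filters, condensed from the trace (wrapped here only for readability):

    # nv-cache candidates: non-zoned, >=1310720 blocks, 64-byte metadata
    cache_disks=$(scripts/rpc.py bdev_get_bdevs | jq -r \
        '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
             .driver_specific.nvme[].pci_address')      # -> 0000:00:10.0
    # base candidates: big enough, not zoned, and not the cache device just chosen
    base_disks=$(scripts/rpc.py bdev_get_bdevs | jq -r \
        '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0"
                      and .zoned == false and .num_blocks >= 1310720)
             .driver_specific.nvme[].pci_address')      # -> 0000:00:11.0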
00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76878 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76878 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76878 ']' 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.035 18:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:58.294 [2024-12-06 18:21:08.696706] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
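fio.sh resolves its workload from the suite table declared above: each suite name maps to a whitespace-separated list of fio job files, the positional arguments carry the base device, cache device and suite name, and FTL_BDEV_NAME/FTL_JSON_CONF are exported for the job files to consume before a dedicated spdk_tgt is started on three cores. In outline (a sketch of the structure visible in the trace, not the full script):

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'

    device=$1 cache_device=$2 tests=${suite[$3]}        # 0000:00:11.0 0000:00:10.0 basic
    export FTL_BDEV_NAME=ftl0
    export FTL_JSON_CONF=$testdir/config/ftl.json       # consumed by the fio jobs
    build/bin/spdk_tgt -m 7 &                           # cores 0-2, as in the log
    svcpid=$!
    waitforlisten "$svcpid"                             # block until /var/tmp/spdk.sock is up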
00:22:58.294 [2024-12-06 18:21:08.697026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76878 ] 00:22:58.551 [2024-12-06 18:21:08.878327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:58.551 [2024-12-06 18:21:08.988162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.551 [2024-12-06 18:21:08.988295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.551 [2024-12-06 18:21:08.988350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.486 18:21:09 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.486 18:21:09 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:22:59.486 18:21:09 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:59.486 18:21:09 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:22:59.486 18:21:09 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:59.486 18:21:09 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:22:59.486 18:21:09 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:22:59.486 18:21:09 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:59.745 18:21:10 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:59.745 18:21:10 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:22:59.745 18:21:10 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:59.745 18:21:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:59.745 18:21:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:59.745 18:21:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:59.745 18:21:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:59.745 18:21:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:00.004 18:21:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:00.004 { 00:23:00.004 "name": "nvme0n1", 00:23:00.004 "aliases": [ 00:23:00.004 "0ed77d7d-af77-4b52-a92e-701b2caada18" 00:23:00.004 ], 00:23:00.004 "product_name": "NVMe disk", 00:23:00.004 "block_size": 4096, 00:23:00.004 "num_blocks": 1310720, 00:23:00.004 "uuid": "0ed77d7d-af77-4b52-a92e-701b2caada18", 00:23:00.004 "numa_id": -1, 00:23:00.004 "assigned_rate_limits": { 00:23:00.004 "rw_ios_per_sec": 0, 00:23:00.004 "rw_mbytes_per_sec": 0, 00:23:00.004 "r_mbytes_per_sec": 0, 00:23:00.004 "w_mbytes_per_sec": 0 00:23:00.004 }, 00:23:00.004 "claimed": false, 00:23:00.004 "zoned": false, 00:23:00.004 "supported_io_types": { 00:23:00.004 "read": true, 00:23:00.004 "write": true, 00:23:00.004 "unmap": true, 00:23:00.004 "flush": true, 00:23:00.004 "reset": true, 00:23:00.004 "nvme_admin": true, 00:23:00.004 "nvme_io": true, 00:23:00.004 "nvme_io_md": false, 00:23:00.004 "write_zeroes": true, 00:23:00.004 "zcopy": false, 00:23:00.004 "get_zone_info": false, 00:23:00.004 "zone_management": false, 00:23:00.004 "zone_append": false, 00:23:00.004 "compare": true, 00:23:00.004 "compare_and_write": false, 00:23:00.004 "abort": true, 00:23:00.004 
"seek_hole": false, 00:23:00.004 "seek_data": false, 00:23:00.004 "copy": true, 00:23:00.004 "nvme_iov_md": false 00:23:00.004 }, 00:23:00.004 "driver_specific": { 00:23:00.004 "nvme": [ 00:23:00.004 { 00:23:00.004 "pci_address": "0000:00:11.0", 00:23:00.004 "trid": { 00:23:00.004 "trtype": "PCIe", 00:23:00.004 "traddr": "0000:00:11.0" 00:23:00.004 }, 00:23:00.004 "ctrlr_data": { 00:23:00.004 "cntlid": 0, 00:23:00.004 "vendor_id": "0x1b36", 00:23:00.004 "model_number": "QEMU NVMe Ctrl", 00:23:00.004 "serial_number": "12341", 00:23:00.004 "firmware_revision": "8.0.0", 00:23:00.004 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:00.004 "oacs": { 00:23:00.004 "security": 0, 00:23:00.004 "format": 1, 00:23:00.004 "firmware": 0, 00:23:00.004 "ns_manage": 1 00:23:00.004 }, 00:23:00.004 "multi_ctrlr": false, 00:23:00.004 "ana_reporting": false 00:23:00.004 }, 00:23:00.004 "vs": { 00:23:00.004 "nvme_version": "1.4" 00:23:00.004 }, 00:23:00.004 "ns_data": { 00:23:00.004 "id": 1, 00:23:00.004 "can_share": false 00:23:00.004 } 00:23:00.004 } 00:23:00.004 ], 00:23:00.004 "mp_policy": "active_passive" 00:23:00.004 } 00:23:00.004 } 00:23:00.004 ]' 00:23:00.004 18:21:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:00.004 18:21:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:00.004 18:21:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:00.004 18:21:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:00.004 18:21:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:00.004 18:21:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:23:00.004 18:21:10 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:23:00.004 18:21:10 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:00.004 18:21:10 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:23:00.004 18:21:10 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:00.004 18:21:10 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:00.263 18:21:10 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:23:00.263 18:21:10 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:00.523 18:21:10 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=517306c0-88c4-485d-9ff8-844e0e30e968 00:23:00.523 18:21:10 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 517306c0-88c4-485d-9ff8-844e0e30e968 00:23:00.523 18:21:11 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=bff7f7f6-3914-40e9-9793-9ee67cf4bace 00:23:00.523 18:21:11 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 bff7f7f6-3914-40e9-9793-9ee67cf4bace 00:23:00.523 18:21:11 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:23:00.523 18:21:11 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:00.523 18:21:11 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=bff7f7f6-3914-40e9-9793-9ee67cf4bace 00:23:00.523 18:21:11 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:23:00.523 18:21:11 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size bff7f7f6-3914-40e9-9793-9ee67cf4bace 00:23:00.523 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=bff7f7f6-3914-40e9-9793-9ee67cf4bace 
00:23:00.523 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:00.523 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:00.523 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:00.523 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bff7f7f6-3914-40e9-9793-9ee67cf4bace 00:23:00.783 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:00.783 { 00:23:00.783 "name": "bff7f7f6-3914-40e9-9793-9ee67cf4bace", 00:23:00.783 "aliases": [ 00:23:00.783 "lvs/nvme0n1p0" 00:23:00.783 ], 00:23:00.783 "product_name": "Logical Volume", 00:23:00.783 "block_size": 4096, 00:23:00.783 "num_blocks": 26476544, 00:23:00.783 "uuid": "bff7f7f6-3914-40e9-9793-9ee67cf4bace", 00:23:00.783 "assigned_rate_limits": { 00:23:00.783 "rw_ios_per_sec": 0, 00:23:00.783 "rw_mbytes_per_sec": 0, 00:23:00.783 "r_mbytes_per_sec": 0, 00:23:00.783 "w_mbytes_per_sec": 0 00:23:00.783 }, 00:23:00.783 "claimed": false, 00:23:00.783 "zoned": false, 00:23:00.783 "supported_io_types": { 00:23:00.783 "read": true, 00:23:00.783 "write": true, 00:23:00.783 "unmap": true, 00:23:00.783 "flush": false, 00:23:00.783 "reset": true, 00:23:00.783 "nvme_admin": false, 00:23:00.783 "nvme_io": false, 00:23:00.783 "nvme_io_md": false, 00:23:00.783 "write_zeroes": true, 00:23:00.783 "zcopy": false, 00:23:00.783 "get_zone_info": false, 00:23:00.783 "zone_management": false, 00:23:00.783 "zone_append": false, 00:23:00.783 "compare": false, 00:23:00.783 "compare_and_write": false, 00:23:00.783 "abort": false, 00:23:00.783 "seek_hole": true, 00:23:00.783 "seek_data": true, 00:23:00.783 "copy": false, 00:23:00.783 "nvme_iov_md": false 00:23:00.783 }, 00:23:00.783 "driver_specific": { 00:23:00.783 "lvol": { 00:23:00.783 "lvol_store_uuid": "517306c0-88c4-485d-9ff8-844e0e30e968", 00:23:00.783 "base_bdev": "nvme0n1", 00:23:00.783 "thin_provision": true, 00:23:00.783 "num_allocated_clusters": 0, 00:23:00.783 "snapshot": false, 00:23:00.783 "clone": false, 00:23:00.783 "esnap_clone": false 00:23:00.783 } 00:23:00.783 } 00:23:00.783 } 00:23:00.783 ]' 00:23:00.783 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:00.783 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:00.783 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:00.783 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:00.783 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:00.783 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:00.783 18:21:11 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:23:00.783 18:21:11 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:23:00.783 18:21:11 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:01.042 18:21:11 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:01.042 18:21:11 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:01.042 18:21:11 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size bff7f7f6-3914-40e9-9793-9ee67cf4bace 00:23:01.042 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=bff7f7f6-3914-40e9-9793-9ee67cf4bace 00:23:01.042 18:21:11 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:01.042 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:01.042 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:01.042 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bff7f7f6-3914-40e9-9793-9ee67cf4bace 00:23:01.302 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:01.302 { 00:23:01.302 "name": "bff7f7f6-3914-40e9-9793-9ee67cf4bace", 00:23:01.302 "aliases": [ 00:23:01.302 "lvs/nvme0n1p0" 00:23:01.302 ], 00:23:01.302 "product_name": "Logical Volume", 00:23:01.302 "block_size": 4096, 00:23:01.302 "num_blocks": 26476544, 00:23:01.302 "uuid": "bff7f7f6-3914-40e9-9793-9ee67cf4bace", 00:23:01.302 "assigned_rate_limits": { 00:23:01.302 "rw_ios_per_sec": 0, 00:23:01.302 "rw_mbytes_per_sec": 0, 00:23:01.302 "r_mbytes_per_sec": 0, 00:23:01.302 "w_mbytes_per_sec": 0 00:23:01.302 }, 00:23:01.302 "claimed": false, 00:23:01.302 "zoned": false, 00:23:01.302 "supported_io_types": { 00:23:01.302 "read": true, 00:23:01.302 "write": true, 00:23:01.302 "unmap": true, 00:23:01.302 "flush": false, 00:23:01.302 "reset": true, 00:23:01.302 "nvme_admin": false, 00:23:01.302 "nvme_io": false, 00:23:01.302 "nvme_io_md": false, 00:23:01.302 "write_zeroes": true, 00:23:01.302 "zcopy": false, 00:23:01.302 "get_zone_info": false, 00:23:01.302 "zone_management": false, 00:23:01.302 "zone_append": false, 00:23:01.302 "compare": false, 00:23:01.302 "compare_and_write": false, 00:23:01.302 "abort": false, 00:23:01.302 "seek_hole": true, 00:23:01.302 "seek_data": true, 00:23:01.302 "copy": false, 00:23:01.302 "nvme_iov_md": false 00:23:01.302 }, 00:23:01.302 "driver_specific": { 00:23:01.302 "lvol": { 00:23:01.302 "lvol_store_uuid": "517306c0-88c4-485d-9ff8-844e0e30e968", 00:23:01.302 "base_bdev": "nvme0n1", 00:23:01.302 "thin_provision": true, 00:23:01.302 "num_allocated_clusters": 0, 00:23:01.302 "snapshot": false, 00:23:01.302 "clone": false, 00:23:01.302 "esnap_clone": false 00:23:01.302 } 00:23:01.302 } 00:23:01.302 } 00:23:01.302 ]' 00:23:01.302 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:01.302 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:01.302 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:01.562 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:01.562 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:01.562 18:21:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:01.562 18:21:11 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:23:01.562 18:21:11 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:01.562 18:21:12 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:23:01.562 18:21:12 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:23:01.562 18:21:12 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:23:01.562 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:23:01.562 18:21:12 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size bff7f7f6-3914-40e9-9793-9ee67cf4bace 00:23:01.562 18:21:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=bff7f7f6-3914-40e9-9793-9ee67cf4bace 00:23:01.562 18:21:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:01.562 18:21:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:01.562 18:21:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:01.562 18:21:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bff7f7f6-3914-40e9-9793-9ee67cf4bace 00:23:01.822 18:21:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:01.822 { 00:23:01.822 "name": "bff7f7f6-3914-40e9-9793-9ee67cf4bace", 00:23:01.822 "aliases": [ 00:23:01.822 "lvs/nvme0n1p0" 00:23:01.822 ], 00:23:01.822 "product_name": "Logical Volume", 00:23:01.822 "block_size": 4096, 00:23:01.822 "num_blocks": 26476544, 00:23:01.822 "uuid": "bff7f7f6-3914-40e9-9793-9ee67cf4bace", 00:23:01.822 "assigned_rate_limits": { 00:23:01.822 "rw_ios_per_sec": 0, 00:23:01.822 "rw_mbytes_per_sec": 0, 00:23:01.822 "r_mbytes_per_sec": 0, 00:23:01.822 "w_mbytes_per_sec": 0 00:23:01.822 }, 00:23:01.822 "claimed": false, 00:23:01.822 "zoned": false, 00:23:01.822 "supported_io_types": { 00:23:01.822 "read": true, 00:23:01.822 "write": true, 00:23:01.822 "unmap": true, 00:23:01.822 "flush": false, 00:23:01.822 "reset": true, 00:23:01.822 "nvme_admin": false, 00:23:01.822 "nvme_io": false, 00:23:01.822 "nvme_io_md": false, 00:23:01.822 "write_zeroes": true, 00:23:01.822 "zcopy": false, 00:23:01.822 "get_zone_info": false, 00:23:01.822 "zone_management": false, 00:23:01.822 "zone_append": false, 00:23:01.822 "compare": false, 00:23:01.822 "compare_and_write": false, 00:23:01.822 "abort": false, 00:23:01.822 "seek_hole": true, 00:23:01.822 "seek_data": true, 00:23:01.822 "copy": false, 00:23:01.822 "nvme_iov_md": false 00:23:01.822 }, 00:23:01.822 "driver_specific": { 00:23:01.822 "lvol": { 00:23:01.822 "lvol_store_uuid": "517306c0-88c4-485d-9ff8-844e0e30e968", 00:23:01.822 "base_bdev": "nvme0n1", 00:23:01.822 "thin_provision": true, 00:23:01.822 "num_allocated_clusters": 0, 00:23:01.822 "snapshot": false, 00:23:01.822 "clone": false, 00:23:01.822 "esnap_clone": false 00:23:01.822 } 00:23:01.822 } 00:23:01.822 } 00:23:01.822 ]' 00:23:01.822 18:21:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:01.822 18:21:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:01.822 18:21:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:01.822 18:21:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:01.822 18:21:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:01.822 18:21:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:01.822 18:21:12 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:23:01.822 18:21:12 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:23:01.822 18:21:12 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d bff7f7f6-3914-40e9-9793-9ee67cf4bace -c nvc0n1p0 --l2p_dram_limit 60 00:23:02.109 [2024-12-06 18:21:12.540649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.109 [2024-12-06 18:21:12.540884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:02.109 [2024-12-06 18:21:12.540916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:02.109 
[2024-12-06 18:21:12.540928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.109 [2024-12-06 18:21:12.541025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.109 [2024-12-06 18:21:12.541041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:02.109 [2024-12-06 18:21:12.541057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:02.109 [2024-12-06 18:21:12.541069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.109 [2024-12-06 18:21:12.541115] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:02.109 [2024-12-06 18:21:12.542235] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:02.110 [2024-12-06 18:21:12.542262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.110 [2024-12-06 18:21:12.542284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:02.110 [2024-12-06 18:21:12.542298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.165 ms 00:23:02.110 [2024-12-06 18:21:12.542308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.110 [2024-12-06 18:21:12.542398] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a111d682-7c0f-402b-a7a8-41abac3647fe 00:23:02.110 [2024-12-06 18:21:12.543914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.110 [2024-12-06 18:21:12.544058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:02.110 [2024-12-06 18:21:12.544079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:02.110 [2024-12-06 18:21:12.544092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.110 [2024-12-06 18:21:12.551531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.110 [2024-12-06 18:21:12.551565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:02.110 [2024-12-06 18:21:12.551578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.381 ms 00:23:02.110 [2024-12-06 18:21:12.551590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.110 [2024-12-06 18:21:12.551705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.110 [2024-12-06 18:21:12.551722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:02.110 [2024-12-06 18:21:12.551733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:23:02.110 [2024-12-06 18:21:12.551750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.110 [2024-12-06 18:21:12.551829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.110 [2024-12-06 18:21:12.551844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:02.110 [2024-12-06 18:21:12.551855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:02.110 [2024-12-06 18:21:12.551867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.110 [2024-12-06 18:21:12.551901] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:02.110 [2024-12-06 18:21:12.557255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.110 [2024-12-06 
18:21:12.557292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:02.110 [2024-12-06 18:21:12.557309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.367 ms 00:23:02.110 [2024-12-06 18:21:12.557322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.110 [2024-12-06 18:21:12.557369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.110 [2024-12-06 18:21:12.557380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:02.110 [2024-12-06 18:21:12.557393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:02.110 [2024-12-06 18:21:12.557403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.110 [2024-12-06 18:21:12.557454] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:02.110 [2024-12-06 18:21:12.557599] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:02.110 [2024-12-06 18:21:12.557621] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:02.110 [2024-12-06 18:21:12.557635] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:02.110 [2024-12-06 18:21:12.557651] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:02.110 [2024-12-06 18:21:12.557663] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:02.110 [2024-12-06 18:21:12.557678] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:02.110 [2024-12-06 18:21:12.557687] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:02.110 [2024-12-06 18:21:12.557700] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:02.110 [2024-12-06 18:21:12.557710] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:02.110 [2024-12-06 18:21:12.557723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.110 [2024-12-06 18:21:12.557736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:02.110 [2024-12-06 18:21:12.557749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:23:02.110 [2024-12-06 18:21:12.557758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.110 [2024-12-06 18:21:12.557845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.110 [2024-12-06 18:21:12.557856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:02.110 [2024-12-06 18:21:12.557868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:02.110 [2024-12-06 18:21:12.557879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.110 [2024-12-06 18:21:12.557989] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:02.110 [2024-12-06 18:21:12.558001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:02.110 [2024-12-06 18:21:12.558017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.110 [2024-12-06 18:21:12.558027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.110 [2024-12-06 18:21:12.558040] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:23:02.110 [2024-12-06 18:21:12.558050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:02.110 [2024-12-06 18:21:12.558061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:02.110 [2024-12-06 18:21:12.558071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:02.110 [2024-12-06 18:21:12.558084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:02.110 [2024-12-06 18:21:12.558094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.110 [2024-12-06 18:21:12.558105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:02.110 [2024-12-06 18:21:12.558118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:02.110 [2024-12-06 18:21:12.558130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.110 [2024-12-06 18:21:12.558139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:02.110 [2024-12-06 18:21:12.558151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:02.110 [2024-12-06 18:21:12.558161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.110 [2024-12-06 18:21:12.558175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:02.110 [2024-12-06 18:21:12.558185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:02.110 [2024-12-06 18:21:12.558197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.110 [2024-12-06 18:21:12.558206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:02.110 [2024-12-06 18:21:12.558217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:02.110 [2024-12-06 18:21:12.558226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.110 [2024-12-06 18:21:12.558238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:02.110 [2024-12-06 18:21:12.558247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:02.110 [2024-12-06 18:21:12.558258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.110 [2024-12-06 18:21:12.558288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:02.110 [2024-12-06 18:21:12.558300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:02.110 [2024-12-06 18:21:12.558309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.110 [2024-12-06 18:21:12.558321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:02.110 [2024-12-06 18:21:12.558330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:02.110 [2024-12-06 18:21:12.558348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.110 [2024-12-06 18:21:12.558358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:02.110 [2024-12-06 18:21:12.558372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:02.110 [2024-12-06 18:21:12.558398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.110 [2024-12-06 18:21:12.558411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:02.110 [2024-12-06 18:21:12.558420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:02.110 [2024-12-06 18:21:12.558431] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.110 [2024-12-06 18:21:12.558441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:02.110 [2024-12-06 18:21:12.558453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:02.110 [2024-12-06 18:21:12.558462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.110 [2024-12-06 18:21:12.558474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:02.110 [2024-12-06 18:21:12.558483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:02.110 [2024-12-06 18:21:12.558494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.110 [2024-12-06 18:21:12.558505] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:02.110 [2024-12-06 18:21:12.558518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:02.110 [2024-12-06 18:21:12.558527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.110 [2024-12-06 18:21:12.558540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.110 [2024-12-06 18:21:12.558551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:02.110 [2024-12-06 18:21:12.558565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:02.110 [2024-12-06 18:21:12.558576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:02.110 [2024-12-06 18:21:12.558588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:02.110 [2024-12-06 18:21:12.558598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:02.110 [2024-12-06 18:21:12.558610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:02.110 [2024-12-06 18:21:12.558620] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:02.110 [2024-12-06 18:21:12.558635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.110 [2024-12-06 18:21:12.558648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:02.111 [2024-12-06 18:21:12.558661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:02.111 [2024-12-06 18:21:12.558671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:02.111 [2024-12-06 18:21:12.558684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:02.111 [2024-12-06 18:21:12.558694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:02.111 [2024-12-06 18:21:12.558708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:02.111 [2024-12-06 18:21:12.558719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:02.111 [2024-12-06 18:21:12.558731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:23:02.111 [2024-12-06 18:21:12.558742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:02.111 [2024-12-06 18:21:12.558758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:02.111 [2024-12-06 18:21:12.558768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:02.111 [2024-12-06 18:21:12.558781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:02.111 [2024-12-06 18:21:12.558791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:02.111 [2024-12-06 18:21:12.558804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:02.111 [2024-12-06 18:21:12.558814] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:02.111 [2024-12-06 18:21:12.558828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.111 [2024-12-06 18:21:12.558841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:02.111 [2024-12-06 18:21:12.558854] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:02.111 [2024-12-06 18:21:12.558865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:02.111 [2024-12-06 18:21:12.558878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:02.111 [2024-12-06 18:21:12.558890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.111 [2024-12-06 18:21:12.558903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:02.111 [2024-12-06 18:21:12.558914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:23:02.111 [2024-12-06 18:21:12.558926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.111 [2024-12-06 18:21:12.558996] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
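Note: two details in the trace above are worth unpacking; the sketches below are illustrative checks, not part of the captured output. First, the earlier message "/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected": the xtrace line '[' -eq 1 ']' shows the left-hand operand of -eq expanded to an empty string, so [ sees only "-eq 1" and reports the unary-operator error; the test simply evaluates false and the script falls through to the default sizing path (l2p_dram_size_mb=60). A minimal bash sketch of the failure mode and the usual guard (the variable name here is hypothetical, not taken from fio.sh):

    flag=""                               # empty/unset flag reproduces the error
    [ $flag -eq 1 ] && echo on            # bash: [: -eq: unary operator expected
    [ "${flag:-0}" -eq 1 ] && echo on     # quoted with a default: quietly false

Second, the layout numbers printed during bdev_ftl_create are internally consistent and can be re-derived from the bdev sizes seen earlier, assuming only the 4 KiB block size and 4-byte L2P entries stated in the dump:

    echo $(( 26476544 * 4096 / 1048576 ))    # 103424 MiB  base device capacity
    echo $(( 20971520 * 4 / 1048576 ))       # 80 MiB      L2P table ("Region l2p ... 80.00 MiB")
    echo $(( 20971520 * 4096 / 1073741824 )) # 80 GiB      logical space exposed as ftl0

With --l2p_dram_limit 60, only a 60 MiB window of that 80 MiB table may stay resident in DRAM, which is what the later "l2p maximum resident size is: 59 (of 60) MiB" message reports.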
00:23:02.111 [2024-12-06 18:21:12.559014] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:06.322 [2024-12-06 18:21:16.185562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.185825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:06.322 [2024-12-06 18:21:16.185922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3632.451 ms 00:23:06.322 [2024-12-06 18:21:16.185964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 18:21:16.227052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.227305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:06.322 [2024-12-06 18:21:16.227475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.760 ms 00:23:06.322 [2024-12-06 18:21:16.227524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 18:21:16.227697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.227776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:06.322 [2024-12-06 18:21:16.227860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:06.322 [2024-12-06 18:21:16.227900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 18:21:16.289240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.289462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:06.322 [2024-12-06 18:21:16.289589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.338 ms 00:23:06.322 [2024-12-06 18:21:16.289634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 18:21:16.289702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.289737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:06.322 [2024-12-06 18:21:16.289812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:06.322 [2024-12-06 18:21:16.289850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 18:21:16.290392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.290530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:06.322 [2024-12-06 18:21:16.290615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:23:06.322 [2024-12-06 18:21:16.290658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 18:21:16.290811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.290853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:06.322 [2024-12-06 18:21:16.290949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:23:06.322 [2024-12-06 18:21:16.290993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 18:21:16.312511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.312678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:06.322 [2024-12-06 
18:21:16.312821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.502 ms 00:23:06.322 [2024-12-06 18:21:16.312842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 18:21:16.325816] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:06.322 [2024-12-06 18:21:16.342434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.342479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:06.322 [2024-12-06 18:21:16.342500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.520 ms 00:23:06.322 [2024-12-06 18:21:16.342527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 18:21:16.430281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.430351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:06.322 [2024-12-06 18:21:16.430391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.836 ms 00:23:06.322 [2024-12-06 18:21:16.430402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 18:21:16.430604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.430618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:06.322 [2024-12-06 18:21:16.430636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:23:06.322 [2024-12-06 18:21:16.430646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 18:21:16.467377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.467528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:06.322 [2024-12-06 18:21:16.467555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.726 ms 00:23:06.322 [2024-12-06 18:21:16.467566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 18:21:16.504195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.504235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:06.322 [2024-12-06 18:21:16.504254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.621 ms 00:23:06.322 [2024-12-06 18:21:16.504276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 18:21:16.505056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.505085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:06.322 [2024-12-06 18:21:16.505100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:23:06.322 [2024-12-06 18:21:16.505110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 18:21:16.606172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.606236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:06.322 [2024-12-06 18:21:16.606259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.152 ms 00:23:06.322 [2024-12-06 18:21:16.606281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 
18:21:16.645393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.645581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:06.322 [2024-12-06 18:21:16.645610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.056 ms 00:23:06.322 [2024-12-06 18:21:16.645621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 18:21:16.682334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.682392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:06.322 [2024-12-06 18:21:16.682410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.721 ms 00:23:06.322 [2024-12-06 18:21:16.682420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 18:21:16.718723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.718764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:06.322 [2024-12-06 18:21:16.718781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.310 ms 00:23:06.322 [2024-12-06 18:21:16.718792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 18:21:16.718845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.718857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:06.322 [2024-12-06 18:21:16.718884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:06.322 [2024-12-06 18:21:16.718894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.322 [2024-12-06 18:21:16.719013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.322 [2024-12-06 18:21:16.719027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:06.323 [2024-12-06 18:21:16.719039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:06.323 [2024-12-06 18:21:16.719050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.323 [2024-12-06 18:21:16.720111] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4185.801 ms, result 0 00:23:06.323 { 00:23:06.323 "name": "ftl0", 00:23:06.323 "uuid": "a111d682-7c0f-402b-a7a8-41abac3647fe" 00:23:06.323 } 00:23:06.323 18:21:16 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:23:06.323 18:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:23:06.323 18:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:06.323 18:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:23:06.323 18:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:06.323 18:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:06.323 18:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:06.581 18:21:16 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:06.581 [ 00:23:06.581 { 00:23:06.581 "name": "ftl0", 00:23:06.581 "aliases": [ 00:23:06.581 "a111d682-7c0f-402b-a7a8-41abac3647fe" 00:23:06.581 ], 00:23:06.581 "product_name": "FTL 
disk", 00:23:06.581 "block_size": 4096, 00:23:06.581 "num_blocks": 20971520, 00:23:06.581 "uuid": "a111d682-7c0f-402b-a7a8-41abac3647fe", 00:23:06.581 "assigned_rate_limits": { 00:23:06.581 "rw_ios_per_sec": 0, 00:23:06.581 "rw_mbytes_per_sec": 0, 00:23:06.581 "r_mbytes_per_sec": 0, 00:23:06.581 "w_mbytes_per_sec": 0 00:23:06.581 }, 00:23:06.581 "claimed": false, 00:23:06.581 "zoned": false, 00:23:06.581 "supported_io_types": { 00:23:06.581 "read": true, 00:23:06.581 "write": true, 00:23:06.581 "unmap": true, 00:23:06.581 "flush": true, 00:23:06.581 "reset": false, 00:23:06.581 "nvme_admin": false, 00:23:06.581 "nvme_io": false, 00:23:06.581 "nvme_io_md": false, 00:23:06.581 "write_zeroes": true, 00:23:06.581 "zcopy": false, 00:23:06.581 "get_zone_info": false, 00:23:06.581 "zone_management": false, 00:23:06.581 "zone_append": false, 00:23:06.581 "compare": false, 00:23:06.581 "compare_and_write": false, 00:23:06.581 "abort": false, 00:23:06.581 "seek_hole": false, 00:23:06.581 "seek_data": false, 00:23:06.581 "copy": false, 00:23:06.581 "nvme_iov_md": false 00:23:06.581 }, 00:23:06.581 "driver_specific": { 00:23:06.581 "ftl": { 00:23:06.581 "base_bdev": "bff7f7f6-3914-40e9-9793-9ee67cf4bace", 00:23:06.581 "cache": "nvc0n1p0" 00:23:06.581 } 00:23:06.581 } 00:23:06.581 } 00:23:06.581 ] 00:23:06.581 18:21:17 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:23:06.581 18:21:17 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:23:06.582 18:21:17 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:06.840 18:21:17 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:23:06.840 18:21:17 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:07.099 [2024-12-06 18:21:17.547331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.099 [2024-12-06 18:21:17.547604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:07.099 [2024-12-06 18:21:17.547631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:07.099 [2024-12-06 18:21:17.547645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.099 [2024-12-06 18:21:17.547694] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:07.099 [2024-12-06 18:21:17.551971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.099 [2024-12-06 18:21:17.552006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:07.099 [2024-12-06 18:21:17.552021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.261 ms 00:23:07.099 [2024-12-06 18:21:17.552032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.099 [2024-12-06 18:21:17.552505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.099 [2024-12-06 18:21:17.552525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:07.099 [2024-12-06 18:21:17.552538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:23:07.099 [2024-12-06 18:21:17.552549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.099 [2024-12-06 18:21:17.555082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.099 [2024-12-06 18:21:17.555223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:07.099 
[2024-12-06 18:21:17.555247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.507 ms 00:23:07.099 [2024-12-06 18:21:17.555258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.099 [2024-12-06 18:21:17.560294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.099 [2024-12-06 18:21:17.560327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:07.099 [2024-12-06 18:21:17.560341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.994 ms 00:23:07.099 [2024-12-06 18:21:17.560367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.099 [2024-12-06 18:21:17.597334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.099 [2024-12-06 18:21:17.597372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:07.099 [2024-12-06 18:21:17.597421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.948 ms 00:23:07.099 [2024-12-06 18:21:17.597431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.099 [2024-12-06 18:21:17.619033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.099 [2024-12-06 18:21:17.619086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:07.099 [2024-12-06 18:21:17.619107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.578 ms 00:23:07.099 [2024-12-06 18:21:17.619135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.099 [2024-12-06 18:21:17.619357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.099 [2024-12-06 18:21:17.619372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:07.099 [2024-12-06 18:21:17.619404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:23:07.099 [2024-12-06 18:21:17.619414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.099 [2024-12-06 18:21:17.656790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.099 [2024-12-06 18:21:17.656829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:07.099 [2024-12-06 18:21:17.656846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.405 ms 00:23:07.099 [2024-12-06 18:21:17.656857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.361 [2024-12-06 18:21:17.693765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.361 [2024-12-06 18:21:17.693803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:07.361 [2024-12-06 18:21:17.693820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.917 ms 00:23:07.361 [2024-12-06 18:21:17.693830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.361 [2024-12-06 18:21:17.730414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.361 [2024-12-06 18:21:17.730453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:07.361 [2024-12-06 18:21:17.730469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.586 ms 00:23:07.361 [2024-12-06 18:21:17.730480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.361 [2024-12-06 18:21:17.766710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.361 [2024-12-06 18:21:17.766750] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:07.361 [2024-12-06 18:21:17.766767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.162 ms 00:23:07.361 [2024-12-06 18:21:17.766777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.361 [2024-12-06 18:21:17.766829] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:07.361 [2024-12-06 18:21:17.766847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.766863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.766875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.766889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.766900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.766913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.766924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.766941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.766952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.766965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.766976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.766989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 
[2024-12-06 18:21:17.767136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:07.361 [2024-12-06 18:21:17.767426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:23:07.362 [2024-12-06 18:21:17.767465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.767999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.768012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.768022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.768035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.768048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.768062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.768073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.768086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.768097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.768111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:07.362 [2024-12-06 18:21:17.768129] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:07.362 [2024-12-06 18:21:17.768142] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a111d682-7c0f-402b-a7a8-41abac3647fe 00:23:07.362 [2024-12-06 18:21:17.768153] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:07.362 [2024-12-06 18:21:17.768168] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:07.362 [2024-12-06 18:21:17.768178] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:07.362 [2024-12-06 18:21:17.768193] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:07.362 [2024-12-06 18:21:17.768203] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:07.362 [2024-12-06 18:21:17.768216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:07.362 [2024-12-06 18:21:17.768226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:07.362 [2024-12-06 18:21:17.768237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:07.362 [2024-12-06 18:21:17.768246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:07.362 [2024-12-06 18:21:17.768258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.362 [2024-12-06 18:21:17.768277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:07.362 [2024-12-06 18:21:17.768291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.433 ms 00:23:07.362 [2024-12-06 18:21:17.768301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.362 [2024-12-06 18:21:17.788509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.362 [2024-12-06 18:21:17.788548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:07.362 [2024-12-06 18:21:17.788564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.173 ms 00:23:07.362 [2024-12-06 18:21:17.788575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.362 [2024-12-06 18:21:17.789117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.362 [2024-12-06 18:21:17.789131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:07.362 [2024-12-06 18:21:17.789145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:23:07.362 [2024-12-06 18:21:17.789155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.362 [2024-12-06 18:21:17.858006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.362 [2024-12-06 18:21:17.858169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:07.362 [2024-12-06 18:21:17.858196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.362 [2024-12-06 18:21:17.858208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
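Note: taking WAF (write amplification factor) as total media writes divided by user writes, the statistics dump above follows directly from its own numbers; the one-liner below is just a check, not part of the captured output:

    awk 'BEGIN { total = 960; user = 0; print (user ? total / user : "inf") }'   # -> inf

All 960 writes were FTL metadata from startup and this shutdown; no fio data has been written yet, which also matches every band reading "0 / 261120 wr_cnt: 0 state: free".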
00:23:07.362 [2024-12-06 18:21:17.858302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.362 [2024-12-06 18:21:17.858313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:07.362 [2024-12-06 18:21:17.858335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.362 [2024-12-06 18:21:17.858353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.362 [2024-12-06 18:21:17.858483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.362 [2024-12-06 18:21:17.858500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:07.362 [2024-12-06 18:21:17.858513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.362 [2024-12-06 18:21:17.858523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.362 [2024-12-06 18:21:17.858556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.362 [2024-12-06 18:21:17.858567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:07.362 [2024-12-06 18:21:17.858579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.362 [2024-12-06 18:21:17.858589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.621 [2024-12-06 18:21:17.992121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.621 [2024-12-06 18:21:17.992185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:07.621 [2024-12-06 18:21:17.992203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.621 [2024-12-06 18:21:17.992214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.621 [2024-12-06 18:21:18.092936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.621 [2024-12-06 18:21:18.092998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:07.621 [2024-12-06 18:21:18.093016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.621 [2024-12-06 18:21:18.093043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.621 [2024-12-06 18:21:18.093164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.621 [2024-12-06 18:21:18.093176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:07.621 [2024-12-06 18:21:18.093193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.621 [2024-12-06 18:21:18.093203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.621 [2024-12-06 18:21:18.093313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.621 [2024-12-06 18:21:18.093326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:07.621 [2024-12-06 18:21:18.093340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.621 [2024-12-06 18:21:18.093349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.621 [2024-12-06 18:21:18.093503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.621 [2024-12-06 18:21:18.093517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:07.621 [2024-12-06 18:21:18.093530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.621 [2024-12-06 
18:21:18.093543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.621 [2024-12-06 18:21:18.093601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.621 [2024-12-06 18:21:18.093614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:07.621 [2024-12-06 18:21:18.093627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.621 [2024-12-06 18:21:18.093637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.621 [2024-12-06 18:21:18.093685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.621 [2024-12-06 18:21:18.093696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:07.621 [2024-12-06 18:21:18.093708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.621 [2024-12-06 18:21:18.093720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.621 [2024-12-06 18:21:18.093779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.621 [2024-12-06 18:21:18.093791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:07.621 [2024-12-06 18:21:18.093804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.621 [2024-12-06 18:21:18.093813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.621 [2024-12-06 18:21:18.093978] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 547.500 ms, result 0 00:23:07.621 true 00:23:07.621 18:21:18 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76878 00:23:07.621 18:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76878 ']' 00:23:07.621 18:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76878 00:23:07.621 18:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:23:07.621 18:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.621 18:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76878 00:23:07.621 killing process with pid 76878 00:23:07.621 18:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:07.621 18:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:07.621 18:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76878' 00:23:07.621 18:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76878 00:23:07.621 18:21:18 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76878 00:23:12.890 18:21:22 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:23:12.890 18:21:22 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:12.890 18:21:22 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:23:12.890 18:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.890 18:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:12.890 18:21:22 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:12.890 18:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:12.890 18:21:22 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:12.890 18:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:12.890 18:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:12.890 18:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:12.890 18:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:12.890 18:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:12.890 18:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:12.890 18:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:12.890 18:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:12.890 18:21:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:12.890 18:21:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:12.890 18:21:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:12.890 18:21:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:12.890 18:21:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:12.890 18:21:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:12.890 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:23:12.890 fio-3.35 00:23:12.890 Starting 1 thread 00:23:18.190 00:23:18.190 test: (groupid=0, jobs=1): err= 0: pid=77093: Fri Dec 6 18:21:28 2024 00:23:18.190 read: IOPS=944, BW=62.7MiB/s (65.8MB/s)(255MiB/4058msec) 00:23:18.190 slat (nsec): min=4565, max=36335, avg=6264.82, stdev=2631.78 00:23:18.190 clat (usec): min=320, max=777, avg=482.75, stdev=53.51 00:23:18.190 lat (usec): min=325, max=784, avg=489.02, stdev=53.87 00:23:18.190 clat percentiles (usec): 00:23:18.190 | 1.00th=[ 347], 5.00th=[ 392], 10.00th=[ 396], 20.00th=[ 453], 00:23:18.190 | 30.00th=[ 461], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 519], 00:23:18.190 | 70.00th=[ 529], 80.00th=[ 529], 90.00th=[ 537], 95.00th=[ 545], 00:23:18.190 | 99.00th=[ 594], 99.50th=[ 611], 99.90th=[ 652], 99.95th=[ 685], 00:23:18.190 | 99.99th=[ 775] 00:23:18.190 write: IOPS=951, BW=63.2MiB/s (66.2MB/s)(256MiB/4054msec); 0 zone resets 00:23:18.190 slat (nsec): min=15931, max=85688, avg=19571.23, stdev=4821.07 00:23:18.190 clat (usec): min=354, max=1133, avg=535.85, stdev=79.76 00:23:18.190 lat (usec): min=378, max=1160, avg=555.42, stdev=80.90 00:23:18.190 clat percentiles (usec): 00:23:18.190 | 1.00th=[ 412], 5.00th=[ 429], 10.00th=[ 474], 20.00th=[ 482], 00:23:18.190 | 30.00th=[ 486], 40.00th=[ 529], 50.00th=[ 545], 60.00th=[ 545], 00:23:18.190 | 70.00th=[ 553], 80.00th=[ 562], 90.00th=[ 611], 95.00th=[ 627], 00:23:18.190 | 99.00th=[ 963], 99.50th=[ 988], 99.90th=[ 1106], 99.95th=[ 1123], 00:23:18.190 | 99.99th=[ 1139] 00:23:18.190 bw ( KiB/s): min=59160, max=66912, per=100.00%, avg=64753.00, stdev=2434.40, samples=8 00:23:18.190 iops : min= 870, max= 984, avg=952.25, stdev=35.80, samples=8 00:23:18.190 lat (usec) : 500=46.48%, 750=52.50%, 1000=0.87% 00:23:18.190 lat 
(msec) : 2=0.14% 00:23:18.190 cpu : usr=99.16%, sys=0.20%, ctx=8, majf=0, minf=1169 00:23:18.190 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:18.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.190 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:18.190 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:18.190 00:23:18.190 Run status group 0 (all jobs): 00:23:18.190 READ: bw=62.7MiB/s (65.8MB/s), 62.7MiB/s-62.7MiB/s (65.8MB/s-65.8MB/s), io=255MiB (267MB), run=4058-4058msec 00:23:18.190 WRITE: bw=63.2MiB/s (66.2MB/s), 63.2MiB/s-63.2MiB/s (66.2MB/s-66.2MB/s), io=256MiB (269MB), run=4054-4054msec 00:23:20.119 ----------------------------------------------------- 00:23:20.119 Suppressions used: 00:23:20.119 count bytes template 00:23:20.119 1 5 /usr/src/fio/parse.c 00:23:20.119 1 8 libtcmalloc_minimal.so 00:23:20.119 1 904 libcrypto.so 00:23:20.119 ----------------------------------------------------- 00:23:20.119 00:23:20.119 18:21:30 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:23:20.119 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.119 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:20.119 18:21:30 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:20.119 18:21:30 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:23:20.119 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:20.119 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:20.119 18:21:30 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:20.119 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:20.119 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:20.120 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:20.120 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:20.120 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:20.120 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:20.120 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:20.120 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:20.120 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:20.120 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:20.120 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:20.120 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:20.120 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:20.120 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:20.120 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:20.120 18:21:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:20.379 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:20.379 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:20.379 fio-3.35 00:23:20.379 Starting 2 threads 00:23:52.468 00:23:52.468 first_half: (groupid=0, jobs=1): err= 0: pid=77198: Fri Dec 6 18:21:57 2024 00:23:52.468 read: IOPS=2579, BW=10.1MiB/s (10.6MB/s)(255MiB/25271msec) 00:23:52.468 slat (nsec): min=3533, max=87527, avg=7790.90, stdev=3808.27 00:23:52.468 clat (usec): min=909, max=272138, avg=37633.83, stdev=17377.94 00:23:52.468 lat (usec): min=917, max=272142, avg=37641.62, stdev=17378.34 00:23:52.468 clat percentiles (msec): 00:23:52.468 | 1.00th=[ 8], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33], 00:23:52.468 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 37], 60.00th=[ 37], 00:23:52.468 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 39], 95.00th=[ 45], 00:23:52.468 | 99.00th=[ 146], 99.50th=[ 163], 99.90th=[ 186], 99.95th=[ 220], 00:23:52.468 | 99.99th=[ 264] 00:23:52.468 write: IOPS=4126, BW=16.1MiB/s (16.9MB/s)(256MiB/15881msec); 0 zone resets 00:23:52.468 slat (usec): min=4, max=710, avg= 7.73, stdev= 5.93 00:23:52.468 clat (usec): min=381, max=103133, avg=11900.41, stdev=21411.37 00:23:52.468 lat (usec): min=391, max=103141, avg=11908.15, stdev=21411.46 00:23:52.468 clat percentiles (usec): 00:23:52.468 | 1.00th=[ 996], 5.00th=[ 1254], 10.00th=[ 1467], 20.00th=[ 1729], 00:23:52.468 | 30.00th=[ 1975], 40.00th=[ 2409], 50.00th=[ 4359], 60.00th=[ 5735], 00:23:52.468 | 70.00th=[ 7308], 80.00th=[ 11469], 90.00th=[ 38011], 95.00th=[ 73925], 00:23:52.468 | 99.00th=[ 88605], 99.50th=[ 90702], 99.90th=[ 98042], 99.95th=[100140], 00:23:52.468 | 99.99th=[101188] 00:23:52.468 bw ( KiB/s): min= 640, max=41576, per=100.00%, avg=24956.71, stdev=11791.53, samples=21 00:23:52.468 iops : min= 160, max=10394, avg=6239.14, stdev=2947.90, samples=21 00:23:52.468 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.48% 00:23:52.468 lat (msec) : 2=15.19%, 4=9.16%, 10=13.63%, 20=7.10%, 50=47.66% 00:23:52.468 lat (msec) : 100=5.71%, 250=1.01%, 500=0.01% 00:23:52.468 cpu : usr=99.22%, sys=0.18%, ctx=41, majf=0, minf=5581 00:23:52.468 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:52.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.468 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:52.468 issued rwts: total=65185,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.468 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:52.468 second_half: (groupid=0, jobs=1): err= 0: pid=77199: Fri Dec 6 18:21:57 2024 00:23:52.468 read: IOPS=2567, BW=10.0MiB/s (10.5MB/s)(255MiB/25418msec) 00:23:52.468 slat (nsec): min=3537, max=50324, avg=6949.65, stdev=3045.21 00:23:52.468 clat (usec): min=947, max=286511, avg=36675.47, stdev=16893.88 00:23:52.468 lat (usec): min=955, max=286519, avg=36682.42, stdev=16894.08 00:23:52.468 clat percentiles (msec): 00:23:52.468 | 1.00th=[ 8], 5.00th=[ 28], 10.00th=[ 32], 20.00th=[ 33], 00:23:52.468 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 37], 00:23:52.468 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 39], 95.00th=[ 42], 
00:23:52.468 | 99.00th=[ 128], 99.50th=[ 157], 99.90th=[ 199], 99.95th=[ 253], 00:23:52.468 | 99.99th=[ 279] 00:23:52.468 write: IOPS=3069, BW=12.0MiB/s (12.6MB/s)(256MiB/21354msec); 0 zone resets 00:23:52.468 slat (usec): min=4, max=317, avg= 8.21, stdev= 4.37 00:23:52.468 clat (usec): min=433, max=103813, avg=13096.74, stdev=21855.34 00:23:52.468 lat (usec): min=450, max=103819, avg=13104.95, stdev=21855.52 00:23:52.468 clat percentiles (usec): 00:23:52.468 | 1.00th=[ 914], 5.00th=[ 1172], 10.00th=[ 1385], 20.00th=[ 1713], 00:23:52.468 | 30.00th=[ 2311], 40.00th=[ 4555], 50.00th=[ 5669], 60.00th=[ 6587], 00:23:52.468 | 70.00th=[ 8160], 80.00th=[ 11994], 90.00th=[ 37487], 95.00th=[ 74974], 00:23:52.468 | 99.00th=[ 89654], 99.50th=[ 92799], 99.90th=[ 99091], 99.95th=[101188], 00:23:52.468 | 99.99th=[102237] 00:23:52.468 bw ( KiB/s): min= 296, max=41168, per=82.11%, avg=20159.69, stdev=11947.94, samples=26 00:23:52.468 iops : min= 74, max=10292, avg=5039.88, stdev=2986.97, samples=26 00:23:52.468 lat (usec) : 500=0.01%, 750=0.12%, 1000=0.92% 00:23:52.468 lat (msec) : 2=12.34%, 4=5.93%, 10=19.51%, 20=6.71%, 50=47.83% 00:23:52.468 lat (msec) : 100=5.78%, 250=0.83%, 500=0.03% 00:23:52.468 cpu : usr=99.32%, sys=0.10%, ctx=34, majf=0, minf=5526 00:23:52.468 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:52.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.468 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:52.468 issued rwts: total=65263,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.468 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:52.468 00:23:52.468 Run status group 0 (all jobs): 00:23:52.468 READ: bw=20.0MiB/s (21.0MB/s), 10.0MiB/s-10.1MiB/s (10.5MB/s-10.6MB/s), io=510MiB (534MB), run=25271-25418msec 00:23:52.468 WRITE: bw=24.0MiB/s (25.1MB/s), 12.0MiB/s-16.1MiB/s (12.6MB/s-16.9MB/s), io=512MiB (537MB), run=15881-21354msec 00:23:52.468 ----------------------------------------------------- 00:23:52.468 Suppressions used: 00:23:52.468 count bytes template 00:23:52.468 2 10 /usr/src/fio/parse.c 00:23:52.468 5 480 /usr/src/fio/iolog.c 00:23:52.468 1 8 libtcmalloc_minimal.so 00:23:52.468 1 904 libcrypto.so 00:23:52.468 ----------------------------------------------------- 00:23:52.468 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
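The xtrace interleaved with these fio runs replays the same preload dance before every fio_bdev invocation: ldd the SPDK fio plugin, grep out the libasan it links against, and put that library ahead of the plugin in LD_PRELOAD so the sanitizer runtime is loaded before fio dlopen()s the ioengine. A standalone sketch of that pattern, with the paths copied from this log (treat them as placeholders on other machines):

  # Run an fio job against the SPDK bdev ioengine under ASan, as traced above.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  job=/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 in this run
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job"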
00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:52.468 18:22:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:23:52.468 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:52.468 fio-3.35 00:23:52.468 Starting 1 thread 00:24:04.679 00:24:04.679 test: (groupid=0, jobs=1): err= 0: pid=77532: Fri Dec 6 18:22:14 2024 00:24:04.679 read: IOPS=8125, BW=31.7MiB/s (33.3MB/s)(255MiB/8024msec) 00:24:04.679 slat (nsec): min=3405, max=80044, avg=5157.49, stdev=1521.24 00:24:04.679 clat (usec): min=597, max=30535, avg=15742.85, stdev=726.12 00:24:04.679 lat (usec): min=602, max=30539, avg=15748.01, stdev=726.10 00:24:04.679 clat percentiles (usec): 00:24:04.679 | 1.00th=[14877], 5.00th=[15008], 10.00th=[15139], 20.00th=[15401], 00:24:04.679 | 30.00th=[15533], 40.00th=[15664], 50.00th=[15664], 60.00th=[15795], 00:24:04.679 | 70.00th=[15926], 80.00th=[16057], 90.00th=[16188], 95.00th=[16450], 00:24:04.679 | 99.00th=[17695], 99.50th=[17957], 99.90th=[22414], 99.95th=[26346], 00:24:04.679 | 99.99th=[29754] 00:24:04.679 write: IOPS=14.5k, BW=56.7MiB/s (59.5MB/s)(256MiB/4514msec); 0 zone resets 00:24:04.679 slat (usec): min=4, max=595, avg= 7.49, stdev= 5.64 00:24:04.679 clat (usec): min=561, max=51404, avg=8771.47, stdev=10747.90 00:24:04.679 lat (usec): min=567, max=51411, avg=8778.96, stdev=10747.92 00:24:04.679 clat percentiles (usec): 00:24:04.679 | 1.00th=[ 922], 5.00th=[ 1090], 10.00th=[ 1237], 20.00th=[ 1385], 00:24:04.679 | 30.00th=[ 1549], 40.00th=[ 1844], 50.00th=[ 5800], 60.00th=[ 6652], 00:24:04.679 | 70.00th=[ 7635], 80.00th=[ 9241], 90.00th=[32113], 95.00th=[33424], 00:24:04.679 | 99.00th=[35390], 99.50th=[35914], 99.90th=[38536], 99.95th=[41157], 00:24:04.679 | 99.99th=[46924] 00:24:04.679 bw ( KiB/s): min= 1016, max=81944, per=90.26%, avg=52416.30, stdev=20992.43, samples=10 00:24:04.680 iops : min= 254, max=20486, avg=13104.00, stdev=5248.07, samples=10 00:24:04.680 lat (usec) : 750=0.03%, 1000=1.30% 00:24:04.680 lat (msec) : 2=19.23%, 4=0.60%, 10=20.22%, 20=50.56%, 50=8.05% 00:24:04.680 lat (msec) : 100=0.01% 00:24:04.680 cpu : usr=98.80%, sys=0.46%, ctx=23, 
majf=0, minf=5565 00:24:04.680 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:04.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.680 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:04.680 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.680 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:04.680 00:24:04.680 Run status group 0 (all jobs): 00:24:04.680 READ: bw=31.7MiB/s (33.3MB/s), 31.7MiB/s-31.7MiB/s (33.3MB/s-33.3MB/s), io=255MiB (267MB), run=8024-8024msec 00:24:04.680 WRITE: bw=56.7MiB/s (59.5MB/s), 56.7MiB/s-56.7MiB/s (59.5MB/s-59.5MB/s), io=256MiB (268MB), run=4514-4514msec 00:24:06.057 ----------------------------------------------------- 00:24:06.057 Suppressions used: 00:24:06.057 count bytes template 00:24:06.057 1 5 /usr/src/fio/parse.c 00:24:06.057 2 192 /usr/src/fio/iolog.c 00:24:06.057 1 8 libtcmalloc_minimal.so 00:24:06.057 1 904 libcrypto.so 00:24:06.057 ----------------------------------------------------- 00:24:06.057 00:24:06.057 18:22:16 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:24:06.057 18:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:06.057 18:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:06.057 18:22:16 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:06.057 Remove shared memory files 00:24:06.057 18:22:16 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:24:06.057 18:22:16 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:06.057 18:22:16 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:24:06.057 18:22:16 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:24:06.057 18:22:16 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57798 /dev/shm/spdk_tgt_trace.pid75774 00:24:06.057 18:22:16 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:06.057 18:22:16 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:24:06.057 ************************************ 00:24:06.057 END TEST ftl_fio_basic 00:24:06.057 ************************************ 00:24:06.057 00:24:06.057 real 1m8.133s 00:24:06.057 user 2m27.560s 00:24:06.057 sys 0m3.906s 00:24:06.057 18:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.057 18:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:06.057 18:22:16 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:06.057 18:22:16 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:06.057 18:22:16 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.057 18:22:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:06.057 ************************************ 00:24:06.057 START TEST ftl_bdevperf 00:24:06.057 ************************************ 00:24:06.057 18:22:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:06.317 * Looking for test storage... 
00:24:06.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:06.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.317 --rc genhtml_branch_coverage=1 00:24:06.317 --rc genhtml_function_coverage=1 00:24:06.317 --rc genhtml_legend=1 00:24:06.317 --rc geninfo_all_blocks=1 00:24:06.317 --rc geninfo_unexecuted_blocks=1 00:24:06.317 00:24:06.317 ' 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:06.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.317 --rc genhtml_branch_coverage=1 00:24:06.317 
--rc genhtml_function_coverage=1 00:24:06.317 --rc genhtml_legend=1 00:24:06.317 --rc geninfo_all_blocks=1 00:24:06.317 --rc geninfo_unexecuted_blocks=1 00:24:06.317 00:24:06.317 ' 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:06.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.317 --rc genhtml_branch_coverage=1 00:24:06.317 --rc genhtml_function_coverage=1 00:24:06.317 --rc genhtml_legend=1 00:24:06.317 --rc geninfo_all_blocks=1 00:24:06.317 --rc geninfo_unexecuted_blocks=1 00:24:06.317 00:24:06.317 ' 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:06.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.317 --rc genhtml_branch_coverage=1 00:24:06.317 --rc genhtml_function_coverage=1 00:24:06.317 --rc genhtml_legend=1 00:24:06.317 --rc geninfo_all_blocks=1 00:24:06.317 --rc geninfo_unexecuted_blocks=1 00:24:06.317 00:24:06.317 ' 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:06.317 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77768 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77768 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77768 ']' 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.318 18:22:16 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:06.577 [2024-12-06 18:22:16.898648] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:24:06.577 [2024-12-06 18:22:16.898957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77768 ] 00:24:06.577 [2024-12-06 18:22:17.077729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.837 [2024-12-06 18:22:17.190672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.404 18:22:17 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.404 18:22:17 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:24:07.404 18:22:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:07.404 18:22:17 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:24:07.404 18:22:17 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:07.404 18:22:17 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:24:07.404 18:22:17 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:24:07.404 18:22:17 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:07.663 18:22:18 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:07.663 18:22:18 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:24:07.663 18:22:18 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:07.663 18:22:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:07.663 18:22:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:07.663 18:22:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:07.663 18:22:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:07.663 18:22:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:07.923 18:22:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:07.923 { 00:24:07.923 "name": "nvme0n1", 00:24:07.923 "aliases": [ 00:24:07.923 "85cb6d9f-7b0b-4f04-b861-a02f80644d7b" 00:24:07.923 ], 00:24:07.923 "product_name": "NVMe disk", 00:24:07.923 "block_size": 4096, 00:24:07.923 "num_blocks": 1310720, 00:24:07.923 "uuid": "85cb6d9f-7b0b-4f04-b861-a02f80644d7b", 00:24:07.923 "numa_id": -1, 00:24:07.923 "assigned_rate_limits": { 00:24:07.923 "rw_ios_per_sec": 0, 00:24:07.923 "rw_mbytes_per_sec": 0, 00:24:07.923 "r_mbytes_per_sec": 0, 00:24:07.923 "w_mbytes_per_sec": 0 00:24:07.923 }, 00:24:07.923 "claimed": true, 00:24:07.923 "claim_type": "read_many_write_one", 00:24:07.923 "zoned": false, 00:24:07.923 "supported_io_types": { 00:24:07.923 "read": true, 00:24:07.923 "write": true, 00:24:07.923 "unmap": true, 00:24:07.923 "flush": true, 00:24:07.923 "reset": true, 00:24:07.923 "nvme_admin": true, 00:24:07.923 "nvme_io": true, 00:24:07.923 "nvme_io_md": false, 00:24:07.923 "write_zeroes": true, 00:24:07.923 "zcopy": false, 00:24:07.923 "get_zone_info": false, 00:24:07.923 "zone_management": false, 00:24:07.923 "zone_append": false, 00:24:07.923 "compare": true, 00:24:07.923 "compare_and_write": false, 00:24:07.923 "abort": true, 00:24:07.923 "seek_hole": false, 00:24:07.923 "seek_data": false, 00:24:07.923 "copy": true, 00:24:07.923 "nvme_iov_md": false 00:24:07.923 }, 00:24:07.923 "driver_specific": { 00:24:07.923 
"nvme": [ 00:24:07.923 { 00:24:07.923 "pci_address": "0000:00:11.0", 00:24:07.923 "trid": { 00:24:07.923 "trtype": "PCIe", 00:24:07.923 "traddr": "0000:00:11.0" 00:24:07.923 }, 00:24:07.923 "ctrlr_data": { 00:24:07.923 "cntlid": 0, 00:24:07.923 "vendor_id": "0x1b36", 00:24:07.923 "model_number": "QEMU NVMe Ctrl", 00:24:07.923 "serial_number": "12341", 00:24:07.923 "firmware_revision": "8.0.0", 00:24:07.923 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:07.923 "oacs": { 00:24:07.923 "security": 0, 00:24:07.923 "format": 1, 00:24:07.923 "firmware": 0, 00:24:07.923 "ns_manage": 1 00:24:07.923 }, 00:24:07.923 "multi_ctrlr": false, 00:24:07.923 "ana_reporting": false 00:24:07.923 }, 00:24:07.923 "vs": { 00:24:07.923 "nvme_version": "1.4" 00:24:07.923 }, 00:24:07.923 "ns_data": { 00:24:07.923 "id": 1, 00:24:07.923 "can_share": false 00:24:07.923 } 00:24:07.923 } 00:24:07.923 ], 00:24:07.923 "mp_policy": "active_passive" 00:24:07.923 } 00:24:07.923 } 00:24:07.923 ]' 00:24:07.923 18:22:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:07.923 18:22:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:07.923 18:22:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:07.923 18:22:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:07.924 18:22:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:07.924 18:22:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:24:07.924 18:22:18 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:24:07.924 18:22:18 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:07.924 18:22:18 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:24:07.924 18:22:18 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:07.924 18:22:18 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:08.183 18:22:18 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=517306c0-88c4-485d-9ff8-844e0e30e968 00:24:08.183 18:22:18 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:24:08.183 18:22:18 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 517306c0-88c4-485d-9ff8-844e0e30e968 00:24:08.443 18:22:18 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:08.443 18:22:18 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=2f98123b-bf1a-44e5-a926-f0e26e779530 00:24:08.443 18:22:18 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2f98123b-bf1a-44e5-a926-f0e26e779530 00:24:08.702 18:22:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=d82aa41a-7be7-4b35-84db-ab6dc7ea10ab 00:24:08.702 18:22:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d82aa41a-7be7-4b35-84db-ab6dc7ea10ab 00:24:08.702 18:22:19 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:24:08.702 18:22:19 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:08.702 18:22:19 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=d82aa41a-7be7-4b35-84db-ab6dc7ea10ab 00:24:08.702 18:22:19 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:24:08.702 18:22:19 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size d82aa41a-7be7-4b35-84db-ab6dc7ea10ab 00:24:08.702 18:22:19 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=d82aa41a-7be7-4b35-84db-ab6dc7ea10ab 00:24:08.702 18:22:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:08.702 18:22:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:08.702 18:22:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:08.702 18:22:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d82aa41a-7be7-4b35-84db-ab6dc7ea10ab 00:24:08.962 18:22:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:08.962 { 00:24:08.962 "name": "d82aa41a-7be7-4b35-84db-ab6dc7ea10ab", 00:24:08.962 "aliases": [ 00:24:08.962 "lvs/nvme0n1p0" 00:24:08.962 ], 00:24:08.962 "product_name": "Logical Volume", 00:24:08.962 "block_size": 4096, 00:24:08.962 "num_blocks": 26476544, 00:24:08.962 "uuid": "d82aa41a-7be7-4b35-84db-ab6dc7ea10ab", 00:24:08.962 "assigned_rate_limits": { 00:24:08.962 "rw_ios_per_sec": 0, 00:24:08.962 "rw_mbytes_per_sec": 0, 00:24:08.962 "r_mbytes_per_sec": 0, 00:24:08.962 "w_mbytes_per_sec": 0 00:24:08.962 }, 00:24:08.962 "claimed": false, 00:24:08.962 "zoned": false, 00:24:08.962 "supported_io_types": { 00:24:08.962 "read": true, 00:24:08.962 "write": true, 00:24:08.962 "unmap": true, 00:24:08.962 "flush": false, 00:24:08.962 "reset": true, 00:24:08.962 "nvme_admin": false, 00:24:08.962 "nvme_io": false, 00:24:08.962 "nvme_io_md": false, 00:24:08.962 "write_zeroes": true, 00:24:08.962 "zcopy": false, 00:24:08.962 "get_zone_info": false, 00:24:08.962 "zone_management": false, 00:24:08.962 "zone_append": false, 00:24:08.962 "compare": false, 00:24:08.962 "compare_and_write": false, 00:24:08.962 "abort": false, 00:24:08.962 "seek_hole": true, 00:24:08.962 "seek_data": true, 00:24:08.962 "copy": false, 00:24:08.962 "nvme_iov_md": false 00:24:08.962 }, 00:24:08.962 "driver_specific": { 00:24:08.962 "lvol": { 00:24:08.962 "lvol_store_uuid": "2f98123b-bf1a-44e5-a926-f0e26e779530", 00:24:08.962 "base_bdev": "nvme0n1", 00:24:08.962 "thin_provision": true, 00:24:08.962 "num_allocated_clusters": 0, 00:24:08.962 "snapshot": false, 00:24:08.962 "clone": false, 00:24:08.962 "esnap_clone": false 00:24:08.962 } 00:24:08.962 } 00:24:08.962 } 00:24:08.962 ]' 00:24:08.962 18:22:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:08.962 18:22:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:08.962 18:22:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:08.962 18:22:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:08.962 18:22:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:08.962 18:22:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:08.962 18:22:19 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:24:08.962 18:22:19 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:24:08.962 18:22:19 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:09.532 18:22:19 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:09.532 18:22:19 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:09.532 18:22:19 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size d82aa41a-7be7-4b35-84db-ab6dc7ea10ab 00:24:09.532 18:22:19 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=d82aa41a-7be7-4b35-84db-ab6dc7ea10ab 00:24:09.532 18:22:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:09.532 18:22:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:09.532 18:22:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:09.532 18:22:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d82aa41a-7be7-4b35-84db-ab6dc7ea10ab 00:24:09.532 18:22:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:09.532 { 00:24:09.532 "name": "d82aa41a-7be7-4b35-84db-ab6dc7ea10ab", 00:24:09.532 "aliases": [ 00:24:09.532 "lvs/nvme0n1p0" 00:24:09.532 ], 00:24:09.532 "product_name": "Logical Volume", 00:24:09.532 "block_size": 4096, 00:24:09.532 "num_blocks": 26476544, 00:24:09.532 "uuid": "d82aa41a-7be7-4b35-84db-ab6dc7ea10ab", 00:24:09.532 "assigned_rate_limits": { 00:24:09.532 "rw_ios_per_sec": 0, 00:24:09.532 "rw_mbytes_per_sec": 0, 00:24:09.532 "r_mbytes_per_sec": 0, 00:24:09.532 "w_mbytes_per_sec": 0 00:24:09.532 }, 00:24:09.532 "claimed": false, 00:24:09.532 "zoned": false, 00:24:09.532 "supported_io_types": { 00:24:09.532 "read": true, 00:24:09.532 "write": true, 00:24:09.532 "unmap": true, 00:24:09.532 "flush": false, 00:24:09.532 "reset": true, 00:24:09.532 "nvme_admin": false, 00:24:09.532 "nvme_io": false, 00:24:09.532 "nvme_io_md": false, 00:24:09.532 "write_zeroes": true, 00:24:09.532 "zcopy": false, 00:24:09.532 "get_zone_info": false, 00:24:09.532 "zone_management": false, 00:24:09.532 "zone_append": false, 00:24:09.532 "compare": false, 00:24:09.532 "compare_and_write": false, 00:24:09.532 "abort": false, 00:24:09.532 "seek_hole": true, 00:24:09.532 "seek_data": true, 00:24:09.532 "copy": false, 00:24:09.532 "nvme_iov_md": false 00:24:09.532 }, 00:24:09.532 "driver_specific": { 00:24:09.532 "lvol": { 00:24:09.532 "lvol_store_uuid": "2f98123b-bf1a-44e5-a926-f0e26e779530", 00:24:09.532 "base_bdev": "nvme0n1", 00:24:09.532 "thin_provision": true, 00:24:09.532 "num_allocated_clusters": 0, 00:24:09.532 "snapshot": false, 00:24:09.532 "clone": false, 00:24:09.532 "esnap_clone": false 00:24:09.532 } 00:24:09.532 } 00:24:09.532 } 00:24:09.532 ]' 00:24:09.532 18:22:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:09.532 18:22:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:09.532 18:22:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:09.792 18:22:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:09.792 18:22:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:09.792 18:22:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:09.792 18:22:20 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:24:09.792 18:22:20 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:09.792 18:22:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:24:09.792 18:22:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size d82aa41a-7be7-4b35-84db-ab6dc7ea10ab 00:24:09.792 18:22:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=d82aa41a-7be7-4b35-84db-ab6dc7ea10ab 00:24:09.792 18:22:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:09.792 18:22:20 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:24:09.792 18:22:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:09.792 18:22:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d82aa41a-7be7-4b35-84db-ab6dc7ea10ab 00:24:10.050 18:22:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:10.050 { 00:24:10.050 "name": "d82aa41a-7be7-4b35-84db-ab6dc7ea10ab", 00:24:10.050 "aliases": [ 00:24:10.050 "lvs/nvme0n1p0" 00:24:10.050 ], 00:24:10.050 "product_name": "Logical Volume", 00:24:10.050 "block_size": 4096, 00:24:10.050 "num_blocks": 26476544, 00:24:10.050 "uuid": "d82aa41a-7be7-4b35-84db-ab6dc7ea10ab", 00:24:10.050 "assigned_rate_limits": { 00:24:10.050 "rw_ios_per_sec": 0, 00:24:10.050 "rw_mbytes_per_sec": 0, 00:24:10.050 "r_mbytes_per_sec": 0, 00:24:10.050 "w_mbytes_per_sec": 0 00:24:10.050 }, 00:24:10.050 "claimed": false, 00:24:10.050 "zoned": false, 00:24:10.050 "supported_io_types": { 00:24:10.050 "read": true, 00:24:10.050 "write": true, 00:24:10.050 "unmap": true, 00:24:10.050 "flush": false, 00:24:10.050 "reset": true, 00:24:10.050 "nvme_admin": false, 00:24:10.050 "nvme_io": false, 00:24:10.050 "nvme_io_md": false, 00:24:10.050 "write_zeroes": true, 00:24:10.050 "zcopy": false, 00:24:10.050 "get_zone_info": false, 00:24:10.050 "zone_management": false, 00:24:10.050 "zone_append": false, 00:24:10.051 "compare": false, 00:24:10.051 "compare_and_write": false, 00:24:10.051 "abort": false, 00:24:10.051 "seek_hole": true, 00:24:10.051 "seek_data": true, 00:24:10.051 "copy": false, 00:24:10.051 "nvme_iov_md": false 00:24:10.051 }, 00:24:10.051 "driver_specific": { 00:24:10.051 "lvol": { 00:24:10.051 "lvol_store_uuid": "2f98123b-bf1a-44e5-a926-f0e26e779530", 00:24:10.051 "base_bdev": "nvme0n1", 00:24:10.051 "thin_provision": true, 00:24:10.051 "num_allocated_clusters": 0, 00:24:10.051 "snapshot": false, 00:24:10.051 "clone": false, 00:24:10.051 "esnap_clone": false 00:24:10.051 } 00:24:10.051 } 00:24:10.051 } 00:24:10.051 ]' 00:24:10.051 18:22:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:10.051 18:22:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:10.051 18:22:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:10.310 18:22:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:10.310 18:22:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:10.310 18:22:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:10.310 18:22:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:24:10.310 18:22:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d82aa41a-7be7-4b35-84db-ab6dc7ea10ab -c nvc0n1p0 --l2p_dram_limit 20 00:24:10.310 [2024-12-06 18:22:20.834669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.310 [2024-12-06 18:22:20.834882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:10.310 [2024-12-06 18:22:20.834911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:10.310 [2024-12-06 18:22:20.834926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.310 [2024-12-06 18:22:20.835008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.310 [2024-12-06 18:22:20.835024] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:10.310 [2024-12-06 18:22:20.835036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:10.310 [2024-12-06 18:22:20.835049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.310 [2024-12-06 18:22:20.835071] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:10.310 [2024-12-06 18:22:20.836202] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:10.310 [2024-12-06 18:22:20.836229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.310 [2024-12-06 18:22:20.836243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:10.311 [2024-12-06 18:22:20.836254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms 00:24:10.311 [2024-12-06 18:22:20.836276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.311 [2024-12-06 18:22:20.836349] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8db8d935-8591-48bc-bb0b-48c63bfc5975 00:24:10.311 [2024-12-06 18:22:20.837719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.311 [2024-12-06 18:22:20.837751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:10.311 [2024-12-06 18:22:20.837771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:10.311 [2024-12-06 18:22:20.837781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.311 [2024-12-06 18:22:20.845267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.311 [2024-12-06 18:22:20.845312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:10.311 [2024-12-06 18:22:20.845327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.455 ms 00:24:10.311 [2024-12-06 18:22:20.845357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.311 [2024-12-06 18:22:20.845468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.311 [2024-12-06 18:22:20.845483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:10.311 [2024-12-06 18:22:20.845501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:24:10.311 [2024-12-06 18:22:20.845511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.311 [2024-12-06 18:22:20.845584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.311 [2024-12-06 18:22:20.845596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:10.311 [2024-12-06 18:22:20.845609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:10.311 [2024-12-06 18:22:20.845619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.311 [2024-12-06 18:22:20.845648] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:10.311 [2024-12-06 18:22:20.850519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.311 [2024-12-06 18:22:20.850554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:10.311 [2024-12-06 18:22:20.850566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.892 ms 00:24:10.311 [2024-12-06 18:22:20.850599] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.311 [2024-12-06 18:22:20.850635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.311 [2024-12-06 18:22:20.850650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:10.311 [2024-12-06 18:22:20.850660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:10.311 [2024-12-06 18:22:20.850673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.311 [2024-12-06 18:22:20.850707] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:10.311 [2024-12-06 18:22:20.850845] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:10.311 [2024-12-06 18:22:20.850859] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:10.311 [2024-12-06 18:22:20.850875] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:10.311 [2024-12-06 18:22:20.850888] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:10.311 [2024-12-06 18:22:20.850903] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:10.311 [2024-12-06 18:22:20.850914] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:10.311 [2024-12-06 18:22:20.850927] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:10.311 [2024-12-06 18:22:20.850936] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:10.311 [2024-12-06 18:22:20.850950] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:10.311 [2024-12-06 18:22:20.850962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.311 [2024-12-06 18:22:20.850975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:10.311 [2024-12-06 18:22:20.850985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:24:10.311 [2024-12-06 18:22:20.850997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.311 [2024-12-06 18:22:20.851070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.311 [2024-12-06 18:22:20.851088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:10.311 [2024-12-06 18:22:20.851098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:10.311 [2024-12-06 18:22:20.851113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.311 [2024-12-06 18:22:20.851192] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:10.311 [2024-12-06 18:22:20.851209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:10.311 [2024-12-06 18:22:20.851219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:10.311 [2024-12-06 18:22:20.851233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.311 [2024-12-06 18:22:20.851243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:10.311 [2024-12-06 18:22:20.851255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:10.311 [2024-12-06 18:22:20.851264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:10.311 
[2024-12-06 18:22:20.851276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:10.311 [2024-12-06 18:22:20.851305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:10.311 [2024-12-06 18:22:20.851317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:10.311 [2024-12-06 18:22:20.851327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:10.311 [2024-12-06 18:22:20.851352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:10.311 [2024-12-06 18:22:20.851361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:10.311 [2024-12-06 18:22:20.851375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:10.311 [2024-12-06 18:22:20.851384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:10.311 [2024-12-06 18:22:20.851398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.311 [2024-12-06 18:22:20.851408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:10.311 [2024-12-06 18:22:20.851419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:10.311 [2024-12-06 18:22:20.851429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.311 [2024-12-06 18:22:20.851441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:10.311 [2024-12-06 18:22:20.851451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:10.311 [2024-12-06 18:22:20.851463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:10.311 [2024-12-06 18:22:20.851472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:10.311 [2024-12-06 18:22:20.851484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:10.311 [2024-12-06 18:22:20.851493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:10.311 [2024-12-06 18:22:20.851505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:10.311 [2024-12-06 18:22:20.851514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:10.311 [2024-12-06 18:22:20.851525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:10.311 [2024-12-06 18:22:20.851535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:10.311 [2024-12-06 18:22:20.851546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:10.311 [2024-12-06 18:22:20.851556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:10.311 [2024-12-06 18:22:20.851570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:10.311 [2024-12-06 18:22:20.851579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:10.311 [2024-12-06 18:22:20.851591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:10.311 [2024-12-06 18:22:20.851600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:10.311 [2024-12-06 18:22:20.851611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:10.311 [2024-12-06 18:22:20.851620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:10.311 [2024-12-06 18:22:20.851633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:10.311 [2024-12-06 18:22:20.851643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:24:10.311 [2024-12-06 18:22:20.851654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.311 [2024-12-06 18:22:20.851663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:10.311 [2024-12-06 18:22:20.851674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:10.311 [2024-12-06 18:22:20.851683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.311 [2024-12-06 18:22:20.851694] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:10.311 [2024-12-06 18:22:20.851704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:10.311 [2024-12-06 18:22:20.851716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:10.311 [2024-12-06 18:22:20.851726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.311 [2024-12-06 18:22:20.851741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:10.311 [2024-12-06 18:22:20.851750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:10.311 [2024-12-06 18:22:20.851762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:10.311 [2024-12-06 18:22:20.851772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:10.311 [2024-12-06 18:22:20.851784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:10.311 [2024-12-06 18:22:20.851794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:10.311 [2024-12-06 18:22:20.851807] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:10.311 [2024-12-06 18:22:20.851819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:10.311 [2024-12-06 18:22:20.851833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:10.311 [2024-12-06 18:22:20.851844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:10.312 [2024-12-06 18:22:20.851857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:10.312 [2024-12-06 18:22:20.851867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:10.312 [2024-12-06 18:22:20.851880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:10.312 [2024-12-06 18:22:20.851890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:10.312 [2024-12-06 18:22:20.851909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:10.312 [2024-12-06 18:22:20.851920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:10.312 [2024-12-06 18:22:20.851942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:10.312 [2024-12-06 18:22:20.851953] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:10.312 [2024-12-06 18:22:20.851967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:10.312 [2024-12-06 18:22:20.851978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:10.312 [2024-12-06 18:22:20.851991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:10.312 [2024-12-06 18:22:20.852002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:10.312 [2024-12-06 18:22:20.852014] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:10.312 [2024-12-06 18:22:20.852026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:10.312 [2024-12-06 18:22:20.852042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:10.312 [2024-12-06 18:22:20.852053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:10.312 [2024-12-06 18:22:20.852066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:10.312 [2024-12-06 18:22:20.852076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:10.312 [2024-12-06 18:22:20.852090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.312 [2024-12-06 18:22:20.852101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:10.312 [2024-12-06 18:22:20.852114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:24:10.312 [2024-12-06 18:22:20.852125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.312 [2024-12-06 18:22:20.852166] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:24:10.312 [2024-12-06 18:22:20.852179] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:14.505 [2024-12-06 18:22:24.372889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.505 [2024-12-06 18:22:24.372960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:14.505 [2024-12-06 18:22:24.372980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3526.433 ms 00:24:14.505 [2024-12-06 18:22:24.373007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.505 [2024-12-06 18:22:24.410166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.505 [2024-12-06 18:22:24.410217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:14.505 [2024-12-06 18:22:24.410236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.924 ms 00:24:14.505 [2024-12-06 18:22:24.410247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.505 [2024-12-06 18:22:24.410417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.505 [2024-12-06 18:22:24.410432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:14.505 [2024-12-06 18:22:24.410449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:14.505 [2024-12-06 18:22:24.410460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.505 [2024-12-06 18:22:24.466129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.505 [2024-12-06 18:22:24.466179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:14.505 [2024-12-06 18:22:24.466209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.699 ms 00:24:14.505 [2024-12-06 18:22:24.466219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.505 [2024-12-06 18:22:24.466272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.505 [2024-12-06 18:22:24.466283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:14.505 [2024-12-06 18:22:24.466309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:14.505 [2024-12-06 18:22:24.466322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.505 [2024-12-06 18:22:24.466821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.505 [2024-12-06 18:22:24.466836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:14.505 [2024-12-06 18:22:24.466850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:24:14.505 [2024-12-06 18:22:24.466860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.505 [2024-12-06 18:22:24.466967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.505 [2024-12-06 18:22:24.466980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:14.505 [2024-12-06 18:22:24.466998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:24:14.505 [2024-12-06 18:22:24.467008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.506 [2024-12-06 18:22:24.485671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.506 [2024-12-06 18:22:24.485707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:14.506 [2024-12-06 
18:22:24.485724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.669 ms 00:24:14.506 [2024-12-06 18:22:24.485762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.506 [2024-12-06 18:22:24.497365] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:24:14.506 [2024-12-06 18:22:24.503278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.506 [2024-12-06 18:22:24.503313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:14.506 [2024-12-06 18:22:24.503326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.470 ms 00:24:14.506 [2024-12-06 18:22:24.503339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.506 [2024-12-06 18:22:24.596512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.506 [2024-12-06 18:22:24.596598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:14.506 [2024-12-06 18:22:24.596632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.292 ms 00:24:14.506 [2024-12-06 18:22:24.596646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.506 [2024-12-06 18:22:24.596896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.506 [2024-12-06 18:22:24.596920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:14.506 [2024-12-06 18:22:24.596932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:24:14.506 [2024-12-06 18:22:24.596948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.506 [2024-12-06 18:22:24.635038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.506 [2024-12-06 18:22:24.635103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:14.506 [2024-12-06 18:22:24.635120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.073 ms 00:24:14.506 [2024-12-06 18:22:24.635150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.506 [2024-12-06 18:22:24.670438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.506 [2024-12-06 18:22:24.670482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:14.506 [2024-12-06 18:22:24.670497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.300 ms 00:24:14.506 [2024-12-06 18:22:24.670510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.506 [2024-12-06 18:22:24.671212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.506 [2024-12-06 18:22:24.671239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:14.506 [2024-12-06 18:22:24.671251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:24:14.506 [2024-12-06 18:22:24.671274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.506 [2024-12-06 18:22:24.770981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.506 [2024-12-06 18:22:24.771174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:14.506 [2024-12-06 18:22:24.771200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.811 ms 00:24:14.506 [2024-12-06 18:22:24.771215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.506 [2024-12-06 
18:22:24.808475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.506 [2024-12-06 18:22:24.808829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:14.506 [2024-12-06 18:22:24.808862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.225 ms 00:24:14.506 [2024-12-06 18:22:24.808877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.506 [2024-12-06 18:22:24.846335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.506 [2024-12-06 18:22:24.846409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:14.506 [2024-12-06 18:22:24.846426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.456 ms 00:24:14.506 [2024-12-06 18:22:24.846440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.506 [2024-12-06 18:22:24.882853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.506 [2024-12-06 18:22:24.883014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:14.506 [2024-12-06 18:22:24.883037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.427 ms 00:24:14.506 [2024-12-06 18:22:24.883050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.506 [2024-12-06 18:22:24.883091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.506 [2024-12-06 18:22:24.883109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:14.506 [2024-12-06 18:22:24.883120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:14.506 [2024-12-06 18:22:24.883133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.506 [2024-12-06 18:22:24.883230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.506 [2024-12-06 18:22:24.883245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:14.506 [2024-12-06 18:22:24.883256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:14.506 [2024-12-06 18:22:24.883285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.506 [2024-12-06 18:22:24.884531] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4056.006 ms, result 0 00:24:14.506 { 00:24:14.506 "name": "ftl0", 00:24:14.506 "uuid": "8db8d935-8591-48bc-bb0b-48c63bfc5975" 00:24:14.506 } 00:24:14.506 18:22:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:24:14.506 18:22:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:24:14.506 18:22:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:24:14.763 18:22:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:24:14.763 [2024-12-06 18:22:25.212348] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:14.763 I/O size of 69632 is greater than zero copy threshold (65536). 00:24:14.763 Zero copy mechanism will not be used. 00:24:14.763 Running I/O for 4 seconds... 
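With that, FTL startup is complete: the lvol bdev was sized with bdev_get_bdevs plus jq, bdev_ftl_create layered ftl0 over it with nvc0n1p0 as the write-buffer cache and a 20 MiB L2P DRAM limit, and bdev_ftl_get_stats confirmed the device answers to its name before the first bdevperf pass (-q 1, 69632-byte random writes) was kicked off. The shell sketch below replays those steps outside the harness; it is a minimal sketch, assuming the same SPDK checkout path and bdev names that appear in the trace above, and it keeps the wide -t 240 RPC timeout because startup scrubs the NV cache before returning (about 3.5 s of the 4.06 s total in this run).

  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py"
  BASE=d82aa41a-7be7-4b35-84db-ab6dc7ea10ab   # lvol bdev used as the base device (from the trace above)
  CACHE=nvc0n1p0                              # NV cache partition

  # Size the base bdev in MiB the same way autotest_common.sh does above:
  # block_size * num_blocks / (1024 * 1024) = 4096 * 26476544 / 1048576 = 103424 MiB.
  bs=$("$RPC" bdev_get_bdevs -b "$BASE" | jq '.[] .block_size')
  nb=$("$RPC" bdev_get_bdevs -b "$BASE" | jq '.[] .num_blocks')
  echo "base bdev: $(( bs * nb / 1024 / 1024 )) MiB"

  # Create the FTL bdev over base + cache with a 20 MiB L2P DRAM budget.
  "$RPC" -t 240 bdev_ftl_create -b ftl0 -d "$BASE" -c "$CACHE" --l2p_dram_limit 20

At queue depth 1 each 69632-byte write also exceeds bdevperf's 65536-byte zero-copy threshold, which is why the log notes above that zero copy is disabled for this pass; the run-1 numbers follow below.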
00:24:17.074 1457.00 IOPS, 96.75 MiB/s
[2024-12-06T18:22:28.219Z] 1478.50 IOPS, 98.18 MiB/s
[2024-12-06T18:22:29.597Z] 1520.67 IOPS, 100.98 MiB/s
[2024-12-06T18:22:29.597Z] 1552.75 IOPS, 103.11 MiB/s
00:24:19.021 Latency(us)
00:24:19.021 [2024-12-06T18:22:29.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:19.021 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:24:19.021 ftl0 : 4.00 1552.39 103.09 0.00 0.00 673.17 227.01 2302.97
00:24:19.021 [2024-12-06T18:22:29.597Z] ===================================================================================================================
00:24:19.021 [2024-12-06T18:22:29.597Z] Total : 1552.39 103.09 0.00 0.00 673.17 227.01 2302.97
00:24:19.021 {
00:24:19.021   "results": [
00:24:19.021     {
00:24:19.021       "job": "ftl0",
00:24:19.021       "core_mask": "0x1",
00:24:19.021       "workload": "randwrite",
00:24:19.021       "status": "finished",
00:24:19.021       "queue_depth": 1,
00:24:19.021       "io_size": 69632,
00:24:19.021       "runtime": 4.001584,
00:24:19.021       "iops": 1552.3852554388461,
00:24:19.021       "mibps": 103.08808336898588,
00:24:19.021       "io_failed": 0,
00:24:19.021       "io_timeout": 0,
00:24:19.021       "avg_latency_us": 673.1704295611291,
00:24:19.021       "min_latency_us": 227.00722891566264,
00:24:19.021       "max_latency_us": 2302.971887550201
00:24:19.021     }
00:24:19.021   ],
00:24:19.021   "core_count": 1
00:24:19.021 }
00:24:19.021 [2024-12-06 18:22:29.217253] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
18:22:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-12-06 18:22:29.330319] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
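The QD1 pass above is the per-command latency view: 1552.39 IOPS of 69632-byte (68 KiB) writes works out to 103.09 MiB/s, and 1 / 1552.39 s = 644 us of wall time per I/O, close to the reported 673 us average once submission gaps are included. When the JSON blob is captured to a file, the headline numbers reduce to one line with jq; the file name below is an assumption, since in this run the JSON goes straight to the console.

  # Reduce a saved bdevperf result blob to one line per job (hypothetical file name).
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' \
      bdevperf_q1.json
  # -> ftl0: 1552.3852554388461 IOPS, 103.08808336898588 MiB/s, avg 673.1704295611291 us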
00:24:20.895 12243.00 IOPS, 47.82 MiB/s
[2024-12-06T18:22:32.417Z] 12017.50 IOPS, 46.94 MiB/s
[2024-12-06T18:22:33.372Z] 11791.33 IOPS, 46.06 MiB/s
[2024-12-06T18:22:33.372Z] 11598.50 IOPS, 45.31 MiB/s
00:24:22.796 Latency(us)
00:24:22.796 [2024-12-06T18:22:33.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:22.796 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:24:22.796 ftl0 : 4.01 11587.10 45.26 0.00 0.00 11024.94 225.36 30320.27
00:24:22.796 [2024-12-06T18:22:33.372Z] ===================================================================================================================
00:24:22.796 [2024-12-06T18:22:33.372Z] Total : 11587.10 45.26 0.00 0.00 11024.94 0.00 30320.27
00:24:22.796 {
00:24:22.796   "results": [
00:24:22.796     {
00:24:22.796       "job": "ftl0",
00:24:22.796       "core_mask": "0x1",
00:24:22.796       "workload": "randwrite",
00:24:22.796       "status": "finished",
00:24:22.796       "queue_depth": 128,
00:24:22.796       "io_size": 4096,
00:24:22.796       "runtime": 4.014723,
00:24:22.796       "iops": 11587.1007788084,
00:24:22.796       "mibps": 45.262112417220315,
00:24:22.796       "io_failed": 0,
00:24:22.796       "io_timeout": 0,
00:24:22.796       "avg_latency_us": 11024.937181827765,
00:24:22.796       "min_latency_us": 225.36224899598395,
00:24:22.796       "max_latency_us": 30320.269879518073
00:24:22.796     }
00:24:22.796   ],
00:24:22.796   "core_count": 1
00:24:22.796 }
00:24:22.796 [2024-12-06 18:22:33.348856] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
18:22:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-12-06 18:22:33.477829] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
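The QD128 4 KiB pass above trades per-command latency for throughput, and its table is self-consistent under Little's law: average latency should be roughly queue depth / IOPS = 128 / 11587.10 s ≈ 11.05 ms, against the reported 11.02 ms (the small gap is the share of the 4.015 s runtime spent ramping the queue). A one-line check, assuming nothing beyond the figures printed above:

  # Little's law sanity check for the QD128 randwrite pass.
  awk 'BEGIN { qd = 128; iops = 11587.1007788084;
               printf "expected avg latency: %.0f us (reported: 11024.94 us)\n", qd / iops * 1e6 }'

The same check applies to the verify pass that starts here, whose results follow below: 128 / 8494.88 s ≈ 15.07 ms against a reported 15.02 ms.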
00:24:24.931 8536.00 IOPS, 33.34 MiB/s
[2024-12-06T18:22:36.884Z] 8708.50 IOPS, 34.02 MiB/s
[2024-12-06T18:22:37.819Z] 8405.33 IOPS, 32.83 MiB/s
[2024-12-06T18:22:37.819Z] 8484.00 IOPS, 33.14 MiB/s
00:24:27.243 Latency(us)
00:24:27.243 [2024-12-06T18:22:37.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:27.243 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:27.243 Verification LBA range: start 0x0 length 0x1400000
00:24:27.243 ftl0 : 4.01 8494.88 33.18 0.00 0.00 15021.34 269.78 18423.78
00:24:27.243 [2024-12-06T18:22:37.819Z] ===================================================================================================================
00:24:27.243 [2024-12-06T18:22:37.819Z] Total : 8494.88 33.18 0.00 0.00 15021.34 0.00 18423.78
00:24:27.243 [2024-12-06 18:22:37.500576] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:24:27.243   "results": [
00:24:27.243     {
00:24:27.243       "job": "ftl0",
00:24:27.243       "core_mask": "0x1",
00:24:27.243       "workload": "verify",
00:24:27.243       "status": "finished",
00:24:27.243       "verify_range": {
00:24:27.243         "start": 0,
00:24:27.243         "length": 20971520
00:24:27.243       },
00:24:27.243       "queue_depth": 128,
00:24:27.243       "io_size": 4096,
00:24:27.243       "runtime": 4.009829,
00:24:27.243       "iops": 8494.875966032467,
00:24:27.243       "mibps": 33.18310924231432,
00:24:27.243       "io_failed": 0,
00:24:27.243       "io_timeout": 0,
00:24:27.243       "avg_latency_us": 15021.34186578684,
00:24:27.243       "min_latency_us": 269.7767068273092,
00:24:27.243       "max_latency_us": 18423.775100401606
00:24:27.243     }
00:24:27.243   ],
00:24:27.243   "core_count": 1
00:24:27.243 }
00:24:27.243 18:22:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:24:27.243 [2024-12-06 18:22:37.699811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:27.243 [2024-12-06 18:22:37.699874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:24:27.243 [2024-12-06 18:22:37.699891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:24:27.243 [2024-12-06 18:22:37.699905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:27.243 [2024-12-06 18:22:37.699930] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:27.243 [2024-12-06 18:22:37.704364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:27.243 [2024-12-06 18:22:37.704399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:24:27.243 [2024-12-06 18:22:37.704416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.418 ms
00:24:27.243 [2024-12-06 18:22:37.704427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:27.243 [2024-12-06 18:22:37.706339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:27.243 [2024-12-06 18:22:37.706532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:24:27.243 [2024-12-06 18:22:37.706566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.885 ms
00:24:27.243 [2024-12-06 18:22:37.706578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:27.501 [2024-12-06 18:22:37.916944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:27.501 [2024-12-06 18:22:37.917016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P 00:24:27.501 [2024-12-06 18:22:37.917042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 210.660 ms 00:24:27.502 [2024-12-06 18:22:37.917054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.502 [2024-12-06 18:22:37.922199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.502 [2024-12-06 18:22:37.922234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:27.502 [2024-12-06 18:22:37.922260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.109 ms 00:24:27.502 [2024-12-06 18:22:37.922281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.502 [2024-12-06 18:22:37.959681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.502 [2024-12-06 18:22:37.959986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:27.502 [2024-12-06 18:22:37.960020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.367 ms 00:24:27.502 [2024-12-06 18:22:37.960031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.502 [2024-12-06 18:22:37.983902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.502 [2024-12-06 18:22:37.983979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:27.502 [2024-12-06 18:22:37.984000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.813 ms 00:24:27.502 [2024-12-06 18:22:37.984027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.502 [2024-12-06 18:22:37.984215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.502 [2024-12-06 18:22:37.984229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:27.502 [2024-12-06 18:22:37.984246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:24:27.502 [2024-12-06 18:22:37.984256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.502 [2024-12-06 18:22:38.020929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.502 [2024-12-06 18:22:38.020972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:27.502 [2024-12-06 18:22:38.020990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.687 ms 00:24:27.502 [2024-12-06 18:22:38.021000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.502 [2024-12-06 18:22:38.057203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.502 [2024-12-06 18:22:38.057261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:27.502 [2024-12-06 18:22:38.057295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.216 ms 00:24:27.502 [2024-12-06 18:22:38.057306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.761 [2024-12-06 18:22:38.093454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.761 [2024-12-06 18:22:38.093496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:27.761 [2024-12-06 18:22:38.093514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.160 ms 00:24:27.761 [2024-12-06 18:22:38.093525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.761 [2024-12-06 18:22:38.130299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.761 [2024-12-06 18:22:38.130345] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:27.761 [2024-12-06 18:22:38.130367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.733 ms 00:24:27.761 [2024-12-06 18:22:38.130385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.761 [2024-12-06 18:22:38.130429] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:27.761 [2024-12-06 18:22:38.130447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:27.761 [2024-12-06 18:22:38.130692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:24:27.762 [2024-12-06 18:22:38.130727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.130987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:27.762 [2024-12-06 18:22:38.131700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:27.763 [2024-12-06 18:22:38.131710] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:27.763 [2024-12-06 18:22:38.131725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:27.763 [2024-12-06 18:22:38.131736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:27.763 [2024-12-06 18:22:38.131748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:27.763 [2024-12-06 18:22:38.131766] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:27.763 [2024-12-06 18:22:38.131778] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8db8d935-8591-48bc-bb0b-48c63bfc5975 00:24:27.763 [2024-12-06 18:22:38.131792] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:27.763 [2024-12-06 18:22:38.131804] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:27.763 [2024-12-06 18:22:38.131814] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:27.763 [2024-12-06 18:22:38.131827] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:27.763 [2024-12-06 18:22:38.131837] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:27.763 [2024-12-06 18:22:38.131850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:27.763 [2024-12-06 18:22:38.131861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:27.763 [2024-12-06 18:22:38.131875] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:27.763 [2024-12-06 18:22:38.131884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:27.763 [2024-12-06 18:22:38.131896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.763 [2024-12-06 18:22:38.131906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:27.763 [2024-12-06 18:22:38.131920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.471 ms 00:24:27.763 [2024-12-06 18:22:38.131931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.763 [2024-12-06 18:22:38.152268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.763 [2024-12-06 18:22:38.152328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:27.763 [2024-12-06 18:22:38.152346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.308 ms 00:24:27.763 [2024-12-06 18:22:38.152357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.763 [2024-12-06 18:22:38.152862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.763 [2024-12-06 18:22:38.152876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:27.763 [2024-12-06 18:22:38.152889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:24:27.763 [2024-12-06 18:22:38.152900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.763 [2024-12-06 18:22:38.209781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.763 [2024-12-06 18:22:38.209833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:27.763 [2024-12-06 18:22:38.209854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.763 [2024-12-06 18:22:38.209865] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:27.763 [2024-12-06 18:22:38.209940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.763 [2024-12-06 18:22:38.209951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:27.763 [2024-12-06 18:22:38.209964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.763 [2024-12-06 18:22:38.209974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.763 [2024-12-06 18:22:38.210104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.763 [2024-12-06 18:22:38.210118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:27.763 [2024-12-06 18:22:38.210131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.763 [2024-12-06 18:22:38.210141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.763 [2024-12-06 18:22:38.210161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.763 [2024-12-06 18:22:38.210177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:27.763 [2024-12-06 18:22:38.210190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.763 [2024-12-06 18:22:38.210200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.763 [2024-12-06 18:22:38.333944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.763 [2024-12-06 18:22:38.333998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:27.763 [2024-12-06 18:22:38.334020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.763 [2024-12-06 18:22:38.334031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.022 [2024-12-06 18:22:38.434944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.022 [2024-12-06 18:22:38.435010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:28.022 [2024-12-06 18:22:38.435029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.022 [2024-12-06 18:22:38.435039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.022 [2024-12-06 18:22:38.435162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.022 [2024-12-06 18:22:38.435176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:28.022 [2024-12-06 18:22:38.435189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.022 [2024-12-06 18:22:38.435199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.022 [2024-12-06 18:22:38.435255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.022 [2024-12-06 18:22:38.435290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:28.022 [2024-12-06 18:22:38.435303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.022 [2024-12-06 18:22:38.435314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.022 [2024-12-06 18:22:38.435430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.022 [2024-12-06 18:22:38.435447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:28.022 [2024-12-06 18:22:38.435464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:24:28.022 [2024-12-06 18:22:38.435476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.022 [2024-12-06 18:22:38.435515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.022 [2024-12-06 18:22:38.435527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:28.022 [2024-12-06 18:22:38.435540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.022 [2024-12-06 18:22:38.435549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.022 [2024-12-06 18:22:38.435591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.022 [2024-12-06 18:22:38.435606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:28.022 [2024-12-06 18:22:38.435618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.022 [2024-12-06 18:22:38.435639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.022 [2024-12-06 18:22:38.435684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.022 [2024-12-06 18:22:38.435701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:28.022 [2024-12-06 18:22:38.435714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.022 [2024-12-06 18:22:38.435724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.022 [2024-12-06 18:22:38.435850] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 737.200 ms, result 0 00:24:28.022 true 00:24:28.022 18:22:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77768 00:24:28.022 18:22:38 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77768 ']' 00:24:28.022 18:22:38 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77768 00:24:28.022 18:22:38 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:24:28.022 18:22:38 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.022 18:22:38 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77768 00:24:28.022 killing process with pid 77768 00:24:28.022 Received shutdown signal, test time was about 4.000000 seconds 00:24:28.022 00:24:28.022 Latency(us) 00:24:28.022 [2024-12-06T18:22:38.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.022 [2024-12-06T18:22:38.598Z] =================================================================================================================== 00:24:28.022 [2024-12-06T18:22:38.598Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:28.022 18:22:38 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:28.022 18:22:38 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:28.022 18:22:38 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77768' 00:24:28.022 18:22:38 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77768 00:24:28.022 18:22:38 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77768 00:24:32.208 18:22:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:32.208 18:22:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:24:32.208 Remove shared memory files 00:24:32.208 18:22:42 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:32.208 18:22:42 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:24:32.208 18:22:42 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:24:32.208 18:22:42 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:24:32.208 18:22:42 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:32.208 18:22:42 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:24:32.208 ************************************ 00:24:32.208 END TEST ftl_bdevperf 00:24:32.208 ************************************ 00:24:32.208 00:24:32.208 real 0m25.730s 00:24:32.208 user 0m28.370s 00:24:32.208 sys 0m1.246s 00:24:32.208 18:22:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:32.208 18:22:42 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:32.208 18:22:42 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:24:32.208 18:22:42 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:32.208 18:22:42 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:32.208 18:22:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:32.208 ************************************ 00:24:32.208 START TEST ftl_trim 00:24:32.208 ************************************ 00:24:32.208 18:22:42 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:24:32.208 * Looking for test storage... 00:24:32.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:32.208 18:22:42 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:32.208 18:22:42 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:24:32.208 18:22:42 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:32.208 18:22:42 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.208 18:22:42 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:24:32.209 18:22:42 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:24:32.209 18:22:42 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.209 18:22:42 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.209 18:22:42 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:24:32.209 18:22:42 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.209 18:22:42 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:32.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.209 --rc genhtml_branch_coverage=1 00:24:32.209 --rc genhtml_function_coverage=1 00:24:32.209 --rc genhtml_legend=1 00:24:32.209 --rc geninfo_all_blocks=1 00:24:32.209 --rc geninfo_unexecuted_blocks=1 00:24:32.209 00:24:32.209 ' 00:24:32.209 18:22:42 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:32.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.209 --rc genhtml_branch_coverage=1 00:24:32.209 --rc genhtml_function_coverage=1 00:24:32.209 --rc genhtml_legend=1 00:24:32.209 --rc geninfo_all_blocks=1 00:24:32.209 --rc geninfo_unexecuted_blocks=1 00:24:32.209 00:24:32.209 ' 00:24:32.209 18:22:42 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:32.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.209 --rc genhtml_branch_coverage=1 00:24:32.209 --rc genhtml_function_coverage=1 00:24:32.209 --rc genhtml_legend=1 00:24:32.209 --rc geninfo_all_blocks=1 00:24:32.209 --rc geninfo_unexecuted_blocks=1 00:24:32.209 00:24:32.209 ' 00:24:32.209 18:22:42 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:32.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.209 --rc genhtml_branch_coverage=1 00:24:32.209 --rc genhtml_function_coverage=1 00:24:32.209 --rc genhtml_legend=1 00:24:32.209 --rc geninfo_all_blocks=1 00:24:32.209 --rc geninfo_unexecuted_blocks=1 00:24:32.209 00:24:32.209 ' 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
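The xtrace above shows scripts/common.sh deciding which coverage flags to export by comparing the installed lcov version (1.15) against 2, field by field. A minimal standalone sketch of that dot-separated comparison, assuming a simplified hypothetical helper rather than the exact cmp_versions implementation traced here:

    lt_version() {
        # Sketch only (hypothetical helper, not scripts/common.sh itself):
        # split both versions on '.', '-' or ':' and compare field by field
        # numerically, returning 0 (true) when $1 < $2.
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1  # equal is not less-than
    }
    lt_version 1.15 2 && echo "lcov older than 2: use the pre-2.0 LCOV_OPTS"

Here 1.15 sorts before 2 because the first fields already differ (1 < 2), which is why this run exports the pre-2.0 --rc lcov_branch_coverage/lcov_function_coverage options shown above.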
00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:32.209 18:22:42 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78133 00:24:32.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78133 00:24:32.209 18:22:42 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:24:32.209 18:22:42 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78133 ']' 00:24:32.209 18:22:42 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.209 18:22:42 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.209 18:22:42 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.209 18:22:42 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.209 18:22:42 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:32.209 [2024-12-06 18:22:42.700419] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:24:32.209 [2024-12-06 18:22:42.700725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78133 ] 00:24:32.473 [2024-12-06 18:22:42.885609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:32.473 [2024-12-06 18:22:43.009052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.473 [2024-12-06 18:22:43.009209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.473 [2024-12-06 18:22:43.009247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.412 18:22:43 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.412 18:22:43 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:24:33.412 18:22:43 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:33.412 18:22:43 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:24:33.412 18:22:43 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:33.412 18:22:43 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:24:33.412 18:22:43 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:24:33.412 18:22:43 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:33.670 18:22:44 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:33.670 18:22:44 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:24:33.670 18:22:44 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:33.671 18:22:44 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:33.671 18:22:44 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:33.671 18:22:44 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:24:33.671 18:22:44 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:24:33.671 18:22:44 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:33.930 18:22:44 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:33.930 { 00:24:33.930 "name": "nvme0n1", 00:24:33.930 "aliases": [ 
00:24:33.930 "69ea64d2-77d2-423f-a5d1-70431318089d" 00:24:33.930 ], 00:24:33.930 "product_name": "NVMe disk", 00:24:33.930 "block_size": 4096, 00:24:33.930 "num_blocks": 1310720, 00:24:33.930 "uuid": "69ea64d2-77d2-423f-a5d1-70431318089d", 00:24:33.930 "numa_id": -1, 00:24:33.930 "assigned_rate_limits": { 00:24:33.930 "rw_ios_per_sec": 0, 00:24:33.930 "rw_mbytes_per_sec": 0, 00:24:33.930 "r_mbytes_per_sec": 0, 00:24:33.930 "w_mbytes_per_sec": 0 00:24:33.930 }, 00:24:33.930 "claimed": true, 00:24:33.930 "claim_type": "read_many_write_one", 00:24:33.930 "zoned": false, 00:24:33.930 "supported_io_types": { 00:24:33.930 "read": true, 00:24:33.930 "write": true, 00:24:33.930 "unmap": true, 00:24:33.930 "flush": true, 00:24:33.930 "reset": true, 00:24:33.930 "nvme_admin": true, 00:24:33.930 "nvme_io": true, 00:24:33.930 "nvme_io_md": false, 00:24:33.930 "write_zeroes": true, 00:24:33.930 "zcopy": false, 00:24:33.930 "get_zone_info": false, 00:24:33.930 "zone_management": false, 00:24:33.930 "zone_append": false, 00:24:33.930 "compare": true, 00:24:33.930 "compare_and_write": false, 00:24:33.930 "abort": true, 00:24:33.930 "seek_hole": false, 00:24:33.930 "seek_data": false, 00:24:33.930 "copy": true, 00:24:33.930 "nvme_iov_md": false 00:24:33.930 }, 00:24:33.930 "driver_specific": { 00:24:33.930 "nvme": [ 00:24:33.930 { 00:24:33.930 "pci_address": "0000:00:11.0", 00:24:33.930 "trid": { 00:24:33.930 "trtype": "PCIe", 00:24:33.930 "traddr": "0000:00:11.0" 00:24:33.930 }, 00:24:33.930 "ctrlr_data": { 00:24:33.930 "cntlid": 0, 00:24:33.930 "vendor_id": "0x1b36", 00:24:33.930 "model_number": "QEMU NVMe Ctrl", 00:24:33.930 "serial_number": "12341", 00:24:33.930 "firmware_revision": "8.0.0", 00:24:33.930 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:33.930 "oacs": { 00:24:33.930 "security": 0, 00:24:33.930 "format": 1, 00:24:33.930 "firmware": 0, 00:24:33.930 "ns_manage": 1 00:24:33.930 }, 00:24:33.930 "multi_ctrlr": false, 00:24:33.930 "ana_reporting": false 00:24:33.930 }, 00:24:33.930 "vs": { 00:24:33.930 "nvme_version": "1.4" 00:24:33.930 }, 00:24:33.930 "ns_data": { 00:24:33.930 "id": 1, 00:24:33.930 "can_share": false 00:24:33.930 } 00:24:33.930 } 00:24:33.930 ], 00:24:33.930 "mp_policy": "active_passive" 00:24:33.930 } 00:24:33.930 } 00:24:33.930 ]' 00:24:33.930 18:22:44 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:33.930 18:22:44 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:24:33.930 18:22:44 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:33.930 18:22:44 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:33.930 18:22:44 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:33.930 18:22:44 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:24:33.930 18:22:44 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:24:33.930 18:22:44 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:33.930 18:22:44 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:24:34.189 18:22:44 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:34.189 18:22:44 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:34.189 18:22:44 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=2f98123b-bf1a-44e5-a926-f0e26e779530 00:24:34.189 18:22:44 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:24:34.189 18:22:44 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 2f98123b-bf1a-44e5-a926-f0e26e779530 00:24:34.447 18:22:44 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:34.715 18:22:45 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=1468977f-4ea3-470d-b9c6-705b1fa7502d 00:24:34.716 18:22:45 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1468977f-4ea3-470d-b9c6-705b1fa7502d 00:24:34.977 18:22:45 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=d3361ec2-4e3d-4576-bfb2-cc4c4f20be16 00:24:34.977 18:22:45 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d3361ec2-4e3d-4576-bfb2-cc4c4f20be16 00:24:34.977 18:22:45 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:24:34.977 18:22:45 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:34.977 18:22:45 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=d3361ec2-4e3d-4576-bfb2-cc4c4f20be16 00:24:34.978 18:22:45 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:24:34.978 18:22:45 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size d3361ec2-4e3d-4576-bfb2-cc4c4f20be16 00:24:34.978 18:22:45 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=d3361ec2-4e3d-4576-bfb2-cc4c4f20be16 00:24:34.978 18:22:45 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:34.978 18:22:45 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:24:34.978 18:22:45 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:24:34.978 18:22:45 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d3361ec2-4e3d-4576-bfb2-cc4c4f20be16 00:24:35.235 18:22:45 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:35.235 { 00:24:35.235 "name": "d3361ec2-4e3d-4576-bfb2-cc4c4f20be16", 00:24:35.235 "aliases": [ 00:24:35.235 "lvs/nvme0n1p0" 00:24:35.235 ], 00:24:35.235 "product_name": "Logical Volume", 00:24:35.235 "block_size": 4096, 00:24:35.235 "num_blocks": 26476544, 00:24:35.235 "uuid": "d3361ec2-4e3d-4576-bfb2-cc4c4f20be16", 00:24:35.235 "assigned_rate_limits": { 00:24:35.235 "rw_ios_per_sec": 0, 00:24:35.235 "rw_mbytes_per_sec": 0, 00:24:35.235 "r_mbytes_per_sec": 0, 00:24:35.235 "w_mbytes_per_sec": 0 00:24:35.235 }, 00:24:35.235 "claimed": false, 00:24:35.235 "zoned": false, 00:24:35.235 "supported_io_types": { 00:24:35.235 "read": true, 00:24:35.235 "write": true, 00:24:35.235 "unmap": true, 00:24:35.235 "flush": false, 00:24:35.235 "reset": true, 00:24:35.235 "nvme_admin": false, 00:24:35.235 "nvme_io": false, 00:24:35.235 "nvme_io_md": false, 00:24:35.235 "write_zeroes": true, 00:24:35.235 "zcopy": false, 00:24:35.235 "get_zone_info": false, 00:24:35.235 "zone_management": false, 00:24:35.235 "zone_append": false, 00:24:35.235 "compare": false, 00:24:35.235 "compare_and_write": false, 00:24:35.235 "abort": false, 00:24:35.235 "seek_hole": true, 00:24:35.235 "seek_data": true, 00:24:35.235 "copy": false, 00:24:35.235 "nvme_iov_md": false 00:24:35.235 }, 00:24:35.235 "driver_specific": { 00:24:35.235 "lvol": { 00:24:35.235 "lvol_store_uuid": "1468977f-4ea3-470d-b9c6-705b1fa7502d", 00:24:35.235 "base_bdev": "nvme0n1", 00:24:35.235 "thin_provision": true, 00:24:35.235 "num_allocated_clusters": 0, 00:24:35.235 "snapshot": false, 00:24:35.235 "clone": false, 00:24:35.236 "esnap_clone": false 00:24:35.236 } 00:24:35.236 } 00:24:35.236 } 00:24:35.236 ]' 00:24:35.236 18:22:45 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:35.236 18:22:45 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:24:35.236 18:22:45 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:35.236 18:22:45 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:35.236 18:22:45 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:35.236 18:22:45 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:24:35.236 18:22:45 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:24:35.236 18:22:45 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:24:35.236 18:22:45 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:35.494 18:22:45 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:35.494 18:22:45 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:35.494 18:22:45 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size d3361ec2-4e3d-4576-bfb2-cc4c4f20be16 00:24:35.494 18:22:45 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=d3361ec2-4e3d-4576-bfb2-cc4c4f20be16 00:24:35.494 18:22:45 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:35.494 18:22:45 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:24:35.494 18:22:45 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:24:35.494 18:22:45 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d3361ec2-4e3d-4576-bfb2-cc4c4f20be16 00:24:35.753 18:22:46 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:35.753 { 00:24:35.753 "name": "d3361ec2-4e3d-4576-bfb2-cc4c4f20be16", 00:24:35.753 "aliases": [ 00:24:35.753 "lvs/nvme0n1p0" 00:24:35.753 ], 00:24:35.753 "product_name": "Logical Volume", 00:24:35.753 "block_size": 4096, 00:24:35.753 "num_blocks": 26476544, 00:24:35.753 "uuid": "d3361ec2-4e3d-4576-bfb2-cc4c4f20be16", 00:24:35.753 "assigned_rate_limits": { 00:24:35.753 "rw_ios_per_sec": 0, 00:24:35.753 "rw_mbytes_per_sec": 0, 00:24:35.753 "r_mbytes_per_sec": 0, 00:24:35.753 "w_mbytes_per_sec": 0 00:24:35.753 }, 00:24:35.753 "claimed": false, 00:24:35.753 "zoned": false, 00:24:35.753 "supported_io_types": { 00:24:35.753 "read": true, 00:24:35.753 "write": true, 00:24:35.753 "unmap": true, 00:24:35.753 "flush": false, 00:24:35.753 "reset": true, 00:24:35.753 "nvme_admin": false, 00:24:35.753 "nvme_io": false, 00:24:35.753 "nvme_io_md": false, 00:24:35.753 "write_zeroes": true, 00:24:35.753 "zcopy": false, 00:24:35.753 "get_zone_info": false, 00:24:35.753 "zone_management": false, 00:24:35.753 "zone_append": false, 00:24:35.753 "compare": false, 00:24:35.753 "compare_and_write": false, 00:24:35.753 "abort": false, 00:24:35.753 "seek_hole": true, 00:24:35.753 "seek_data": true, 00:24:35.753 "copy": false, 00:24:35.753 "nvme_iov_md": false 00:24:35.753 }, 00:24:35.753 "driver_specific": { 00:24:35.753 "lvol": { 00:24:35.753 "lvol_store_uuid": "1468977f-4ea3-470d-b9c6-705b1fa7502d", 00:24:35.753 "base_bdev": "nvme0n1", 00:24:35.753 "thin_provision": true, 00:24:35.753 "num_allocated_clusters": 0, 00:24:35.753 "snapshot": false, 00:24:35.753 "clone": false, 00:24:35.753 "esnap_clone": false 00:24:35.753 } 00:24:35.753 } 00:24:35.753 } 00:24:35.753 ]' 00:24:35.753 18:22:46 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:35.753 18:22:46 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:24:35.753 18:22:46 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:35.753 18:22:46 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:35.753 18:22:46 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:35.753 18:22:46 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:24:35.753 18:22:46 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:24:35.753 18:22:46 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:36.012 18:22:46 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:24:36.012 18:22:46 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:24:36.012 18:22:46 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size d3361ec2-4e3d-4576-bfb2-cc4c4f20be16 00:24:36.012 18:22:46 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=d3361ec2-4e3d-4576-bfb2-cc4c4f20be16 00:24:36.012 18:22:46 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:36.012 18:22:46 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:24:36.012 18:22:46 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:24:36.012 18:22:46 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d3361ec2-4e3d-4576-bfb2-cc4c4f20be16 00:24:36.271 18:22:46 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:36.271 { 00:24:36.271 "name": "d3361ec2-4e3d-4576-bfb2-cc4c4f20be16", 00:24:36.271 "aliases": [ 00:24:36.271 "lvs/nvme0n1p0" 00:24:36.271 ], 00:24:36.271 "product_name": "Logical Volume", 00:24:36.271 "block_size": 4096, 00:24:36.271 "num_blocks": 26476544, 00:24:36.271 "uuid": "d3361ec2-4e3d-4576-bfb2-cc4c4f20be16", 00:24:36.271 "assigned_rate_limits": { 00:24:36.271 "rw_ios_per_sec": 0, 00:24:36.271 "rw_mbytes_per_sec": 0, 00:24:36.271 "r_mbytes_per_sec": 0, 00:24:36.271 "w_mbytes_per_sec": 0 00:24:36.271 }, 00:24:36.271 "claimed": false, 00:24:36.271 "zoned": false, 00:24:36.271 "supported_io_types": { 00:24:36.271 "read": true, 00:24:36.271 "write": true, 00:24:36.271 "unmap": true, 00:24:36.271 "flush": false, 00:24:36.271 "reset": true, 00:24:36.271 "nvme_admin": false, 00:24:36.271 "nvme_io": false, 00:24:36.271 "nvme_io_md": false, 00:24:36.271 "write_zeroes": true, 00:24:36.271 "zcopy": false, 00:24:36.271 "get_zone_info": false, 00:24:36.271 "zone_management": false, 00:24:36.271 "zone_append": false, 00:24:36.271 "compare": false, 00:24:36.271 "compare_and_write": false, 00:24:36.271 "abort": false, 00:24:36.271 "seek_hole": true, 00:24:36.271 "seek_data": true, 00:24:36.271 "copy": false, 00:24:36.271 "nvme_iov_md": false 00:24:36.271 }, 00:24:36.271 "driver_specific": { 00:24:36.271 "lvol": { 00:24:36.271 "lvol_store_uuid": "1468977f-4ea3-470d-b9c6-705b1fa7502d", 00:24:36.271 "base_bdev": "nvme0n1", 00:24:36.271 "thin_provision": true, 00:24:36.271 "num_allocated_clusters": 0, 00:24:36.271 "snapshot": false, 00:24:36.271 "clone": false, 00:24:36.271 "esnap_clone": false 00:24:36.271 } 00:24:36.271 } 00:24:36.271 } 00:24:36.271 ]' 00:24:36.271 18:22:46 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:36.271 18:22:46 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:24:36.271 18:22:46 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:36.271 18:22:46 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:24:36.271 18:22:46 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:36.271 18:22:46 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:24:36.271 18:22:46 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:24:36.271 18:22:46 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d3361ec2-4e3d-4576-bfb2-cc4c4f20be16 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:24:36.532 [2024-12-06 18:22:46.932339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.532 [2024-12-06 18:22:46.932399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:36.532 [2024-12-06 18:22:46.932422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:36.532 [2024-12-06 18:22:46.932433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.532 [2024-12-06 18:22:46.935819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.532 [2024-12-06 18:22:46.935982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:36.532 [2024-12-06 18:22:46.936009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.359 ms 00:24:36.532 [2024-12-06 18:22:46.936020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.532 [2024-12-06 18:22:46.936227] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:36.532 [2024-12-06 18:22:46.937237] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:36.532 [2024-12-06 18:22:46.937278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.532 [2024-12-06 18:22:46.937290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:36.532 [2024-12-06 18:22:46.937304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:24:36.532 [2024-12-06 18:22:46.937315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.532 [2024-12-06 18:22:46.937596] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0182d045-795a-443b-ad13-478c5d3e8b79 00:24:36.532 [2024-12-06 18:22:46.939066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.532 [2024-12-06 18:22:46.939214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:36.532 [2024-12-06 18:22:46.939234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:36.532 [2024-12-06 18:22:46.939247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.532 [2024-12-06 18:22:46.946743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.532 [2024-12-06 18:22:46.946784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:36.532 [2024-12-06 18:22:46.946800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.404 ms 00:24:36.532 [2024-12-06 18:22:46.946813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.532 [2024-12-06 18:22:46.946967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.532 [2024-12-06 18:22:46.946985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:36.532 [2024-12-06 18:22:46.946997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.083 ms 00:24:36.532 [2024-12-06 18:22:46.947014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.532 [2024-12-06 18:22:46.947053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.532 [2024-12-06 18:22:46.947067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:36.532 [2024-12-06 18:22:46.947078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:36.532 [2024-12-06 18:22:46.947093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.532 [2024-12-06 18:22:46.947133] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:36.532 [2024-12-06 18:22:46.952056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.532 [2024-12-06 18:22:46.952096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:36.532 [2024-12-06 18:22:46.952111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.936 ms 00:24:36.532 [2024-12-06 18:22:46.952121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.532 [2024-12-06 18:22:46.952203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.532 [2024-12-06 18:22:46.952232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:36.532 [2024-12-06 18:22:46.952246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:36.532 [2024-12-06 18:22:46.952256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.532 [2024-12-06 18:22:46.952312] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:36.532 [2024-12-06 18:22:46.952447] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:36.532 [2024-12-06 18:22:46.952467] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:36.532 [2024-12-06 18:22:46.952481] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:36.532 [2024-12-06 18:22:46.952497] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:36.532 [2024-12-06 18:22:46.952509] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:36.532 [2024-12-06 18:22:46.952524] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:36.532 [2024-12-06 18:22:46.952534] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:36.532 [2024-12-06 18:22:46.952547] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:36.532 [2024-12-06 18:22:46.952559] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:36.532 [2024-12-06 18:22:46.952573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.532 [2024-12-06 18:22:46.952583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:36.532 [2024-12-06 18:22:46.952596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:24:36.532 [2024-12-06 18:22:46.952606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.532 [2024-12-06 18:22:46.952694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.532 
[2024-12-06 18:22:46.952704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:36.532 [2024-12-06 18:22:46.952717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:36.532 [2024-12-06 18:22:46.952727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.532 [2024-12-06 18:22:46.952849] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:36.532 [2024-12-06 18:22:46.952862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:36.532 [2024-12-06 18:22:46.952874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:36.532 [2024-12-06 18:22:46.952884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.532 [2024-12-06 18:22:46.952897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:36.532 [2024-12-06 18:22:46.952906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:36.532 [2024-12-06 18:22:46.952918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:36.532 [2024-12-06 18:22:46.952927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:36.532 [2024-12-06 18:22:46.952938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:36.532 [2024-12-06 18:22:46.952947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:36.532 [2024-12-06 18:22:46.952961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:36.532 [2024-12-06 18:22:46.952970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:36.532 [2024-12-06 18:22:46.952981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:36.532 [2024-12-06 18:22:46.952991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:36.532 [2024-12-06 18:22:46.953002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:36.532 [2024-12-06 18:22:46.953011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.532 [2024-12-06 18:22:46.953026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:36.532 [2024-12-06 18:22:46.953035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:36.532 [2024-12-06 18:22:46.953047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.532 [2024-12-06 18:22:46.953056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:36.532 [2024-12-06 18:22:46.953067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:36.532 [2024-12-06 18:22:46.953076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.532 [2024-12-06 18:22:46.953087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:36.532 [2024-12-06 18:22:46.953097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:36.532 [2024-12-06 18:22:46.953108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.532 [2024-12-06 18:22:46.953117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:36.532 [2024-12-06 18:22:46.953128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:36.532 [2024-12-06 18:22:46.953137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.532 [2024-12-06 18:22:46.953148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:24:36.532 [2024-12-06 18:22:46.953158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:36.532 [2024-12-06 18:22:46.953169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.532 [2024-12-06 18:22:46.953178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:36.532 [2024-12-06 18:22:46.953191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:36.532 [2024-12-06 18:22:46.953201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:36.532 [2024-12-06 18:22:46.953212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:36.532 [2024-12-06 18:22:46.953222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:36.532 [2024-12-06 18:22:46.953235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:36.532 [2024-12-06 18:22:46.953244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:36.533 [2024-12-06 18:22:46.953256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:36.533 [2024-12-06 18:22:46.953275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.533 [2024-12-06 18:22:46.953287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:36.533 [2024-12-06 18:22:46.953296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:36.533 [2024-12-06 18:22:46.953308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.533 [2024-12-06 18:22:46.953317] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:36.533 [2024-12-06 18:22:46.953330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:36.533 [2024-12-06 18:22:46.953340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:36.533 [2024-12-06 18:22:46.953352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.533 [2024-12-06 18:22:46.953362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:36.533 [2024-12-06 18:22:46.953376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:36.533 [2024-12-06 18:22:46.953385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:36.533 [2024-12-06 18:22:46.953397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:36.533 [2024-12-06 18:22:46.953406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:36.533 [2024-12-06 18:22:46.953417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:36.533 [2024-12-06 18:22:46.953428] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:36.533 [2024-12-06 18:22:46.953443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:36.533 [2024-12-06 18:22:46.953456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:36.533 [2024-12-06 18:22:46.953469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:36.533 [2024-12-06 18:22:46.953479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:24:36.533 [2024-12-06 18:22:46.953492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:36.533 [2024-12-06 18:22:46.953502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:36.533 [2024-12-06 18:22:46.953514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:36.533 [2024-12-06 18:22:46.953525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:36.533 [2024-12-06 18:22:46.953539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:36.533 [2024-12-06 18:22:46.953549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:36.533 [2024-12-06 18:22:46.953564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:36.533 [2024-12-06 18:22:46.953575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:36.533 [2024-12-06 18:22:46.953587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:36.533 [2024-12-06 18:22:46.953598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:36.533 [2024-12-06 18:22:46.953611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:36.533 [2024-12-06 18:22:46.953621] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:36.533 [2024-12-06 18:22:46.953638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:36.533 [2024-12-06 18:22:46.953650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:36.533 [2024-12-06 18:22:46.953663] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:36.533 [2024-12-06 18:22:46.953673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:36.533 [2024-12-06 18:22:46.953686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:36.533 [2024-12-06 18:22:46.953696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.533 [2024-12-06 18:22:46.953709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:36.533 [2024-12-06 18:22:46.953720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:24:36.533 [2024-12-06 18:22:46.953732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.533 [2024-12-06 18:22:46.953811] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:24:36.533 [2024-12-06 18:22:46.953836] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:39.820 [2024-12-06 18:22:50.295502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.820 [2024-12-06 18:22:50.295573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:39.820 [2024-12-06 18:22:50.295592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3347.114 ms 00:24:39.820 [2024-12-06 18:22:50.295605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.820 [2024-12-06 18:22:50.334292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.820 [2024-12-06 18:22:50.334351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:39.820 [2024-12-06 18:22:50.334369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.384 ms 00:24:39.820 [2024-12-06 18:22:50.334390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.820 [2024-12-06 18:22:50.334536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.820 [2024-12-06 18:22:50.334552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:39.820 [2024-12-06 18:22:50.334583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:39.820 [2024-12-06 18:22:50.334601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.820 [2024-12-06 18:22:50.392509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.820 [2024-12-06 18:22:50.392560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:39.820 [2024-12-06 18:22:50.392576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.966 ms 00:24:39.820 [2024-12-06 18:22:50.392590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.820 [2024-12-06 18:22:50.392702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.820 [2024-12-06 18:22:50.392718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:39.820 [2024-12-06 18:22:50.392730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:39.820 [2024-12-06 18:22:50.392743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.820 [2024-12-06 18:22:50.393181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.820 [2024-12-06 18:22:50.393201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:39.820 [2024-12-06 18:22:50.393212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:24:39.820 [2024-12-06 18:22:50.393224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.820 [2024-12-06 18:22:50.393350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.820 [2024-12-06 18:22:50.393365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:39.820 [2024-12-06 18:22:50.393391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:24:39.820 [2024-12-06 18:22:50.393407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.078 [2024-12-06 18:22:50.415706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.078 [2024-12-06 18:22:50.415911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:24:40.078 [2024-12-06 18:22:50.415933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.299 ms 00:24:40.078 [2024-12-06 18:22:50.415947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.078 [2024-12-06 18:22:50.429023] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:40.078 [2024-12-06 18:22:50.445374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.078 [2024-12-06 18:22:50.445423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:40.078 [2024-12-06 18:22:50.445441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.341 ms 00:24:40.078 [2024-12-06 18:22:50.445452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.078 [2024-12-06 18:22:50.546291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.078 [2024-12-06 18:22:50.546516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:40.078 [2024-12-06 18:22:50.546547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.891 ms 00:24:40.078 [2024-12-06 18:22:50.546558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.078 [2024-12-06 18:22:50.546818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.078 [2024-12-06 18:22:50.546833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:40.078 [2024-12-06 18:22:50.546851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:24:40.078 [2024-12-06 18:22:50.546861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.078 [2024-12-06 18:22:50.583103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.078 [2024-12-06 18:22:50.583144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:40.078 [2024-12-06 18:22:50.583163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.260 ms 00:24:40.078 [2024-12-06 18:22:50.583173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.078 [2024-12-06 18:22:50.618740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.078 [2024-12-06 18:22:50.618778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:40.078 [2024-12-06 18:22:50.618795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.530 ms 00:24:40.078 [2024-12-06 18:22:50.618806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.078 [2024-12-06 18:22:50.619594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.078 [2024-12-06 18:22:50.619620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:40.078 [2024-12-06 18:22:50.619634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:24:40.078 [2024-12-06 18:22:50.619644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.336 [2024-12-06 18:22:50.720104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.336 [2024-12-06 18:22:50.720154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:40.336 [2024-12-06 18:22:50.720177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.579 ms 00:24:40.336 [2024-12-06 18:22:50.720189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
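The sizes traced through this startup all follow from figures already printed in the log, so they can be sanity-checked with a little shell arithmetic. A rough worked example, assuming only the logged values (4096-byte blocks, the 102400.00 MiB data_btm region, a 4-byte L2P address size) and reading --overprovisioning 10 as a straight 10% carve-out:

    # get_bdev_size: MiB = num_blocks * block_size / 1024 / 1024
    echo $(( 1310720  * 4096 / 1024 / 1024 ))  # 5120   MiB: nvme0n1, the base NVMe disk
    echo $(( 26476544 * 4096 / 1024 / 1024 ))  # 103424 MiB: the thin-provisioned lvol

    # ftl0 capacity: 102400 MiB of 4 KiB blocks, minus 10% overprovisioning
    echo $(( 102400 * 1024 / 4 * 90 / 100 ))   # 23592960, matching 'L2P entries: 23592960'

    # L2P footprint: one 4-byte entry per addressable block
    echo $(( 23592960 * 4 / 1024 / 1024 ))     # 90, the 90.00 MiB 'Region l2p' above

The last figure also explains the 'l2p maximum resident size is: 59 (of 60) MiB' notice just above: a 90 MiB mapping table cannot fit within the 60 MiB --l2p_dram_limit, so the L2P cache keeps at most 59 MiB resident and pages the remainder.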
00:24:40.336 [2024-12-06 18:22:50.759317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.336 [2024-12-06 18:22:50.759379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:40.336 [2024-12-06 18:22:50.759401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.055 ms 00:24:40.336 [2024-12-06 18:22:50.759412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.336 [2024-12-06 18:22:50.797921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.336 [2024-12-06 18:22:50.797967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:40.336 [2024-12-06 18:22:50.797986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.428 ms 00:24:40.336 [2024-12-06 18:22:50.797997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.336 [2024-12-06 18:22:50.834131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.336 [2024-12-06 18:22:50.834301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:40.336 [2024-12-06 18:22:50.834328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.093 ms 00:24:40.336 [2024-12-06 18:22:50.834339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.336 [2024-12-06 18:22:50.834438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.336 [2024-12-06 18:22:50.834455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:40.336 [2024-12-06 18:22:50.834472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:40.336 [2024-12-06 18:22:50.834482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.336 [2024-12-06 18:22:50.834567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.336 [2024-12-06 18:22:50.834578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:40.336 [2024-12-06 18:22:50.834592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:40.336 [2024-12-06 18:22:50.834606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.336 [2024-12-06 18:22:50.835619] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:40.336 { 00:24:40.336 "name": "ftl0", 00:24:40.336 "uuid": "0182d045-795a-443b-ad13-478c5d3e8b79" 00:24:40.336 } 00:24:40.336 [2024-12-06 18:22:50.839925] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3909.340 ms, result 0 00:24:40.336 [2024-12-06 18:22:50.840691] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:40.336 18:22:50 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:24:40.336 18:22:50 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:24:40.336 18:22:50 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:40.336 18:22:50 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:24:40.336 18:22:50 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:40.336 18:22:50 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:40.336 18:22:50 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:40.594 18:22:51 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:24:40.852 [ 00:24:40.852 { 00:24:40.852 "name": "ftl0", 00:24:40.852 "aliases": [ 00:24:40.852 "0182d045-795a-443b-ad13-478c5d3e8b79" 00:24:40.852 ], 00:24:40.852 "product_name": "FTL disk", 00:24:40.852 "block_size": 4096, 00:24:40.852 "num_blocks": 23592960, 00:24:40.852 "uuid": "0182d045-795a-443b-ad13-478c5d3e8b79", 00:24:40.852 "assigned_rate_limits": { 00:24:40.852 "rw_ios_per_sec": 0, 00:24:40.852 "rw_mbytes_per_sec": 0, 00:24:40.852 "r_mbytes_per_sec": 0, 00:24:40.852 "w_mbytes_per_sec": 0 00:24:40.852 }, 00:24:40.852 "claimed": false, 00:24:40.852 "zoned": false, 00:24:40.852 "supported_io_types": { 00:24:40.852 "read": true, 00:24:40.852 "write": true, 00:24:40.852 "unmap": true, 00:24:40.852 "flush": true, 00:24:40.852 "reset": false, 00:24:40.852 "nvme_admin": false, 00:24:40.852 "nvme_io": false, 00:24:40.852 "nvme_io_md": false, 00:24:40.852 "write_zeroes": true, 00:24:40.852 "zcopy": false, 00:24:40.852 "get_zone_info": false, 00:24:40.852 "zone_management": false, 00:24:40.852 "zone_append": false, 00:24:40.852 "compare": false, 00:24:40.852 "compare_and_write": false, 00:24:40.852 "abort": false, 00:24:40.852 "seek_hole": false, 00:24:40.852 "seek_data": false, 00:24:40.852 "copy": false, 00:24:40.852 "nvme_iov_md": false 00:24:40.852 }, 00:24:40.852 "driver_specific": { 00:24:40.852 "ftl": { 00:24:40.852 "base_bdev": "d3361ec2-4e3d-4576-bfb2-cc4c4f20be16", 00:24:40.852 "cache": "nvc0n1p0" 00:24:40.852 } 00:24:40.852 } 00:24:40.852 } 00:24:40.852 ] 00:24:40.852 18:22:51 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:24:40.852 18:22:51 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:24:40.852 18:22:51 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:41.110 18:22:51 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:24:41.110 18:22:51 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:24:41.369 18:22:51 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:24:41.369 { 00:24:41.370 "name": "ftl0", 00:24:41.370 "aliases": [ 00:24:41.370 "0182d045-795a-443b-ad13-478c5d3e8b79" 00:24:41.370 ], 00:24:41.370 "product_name": "FTL disk", 00:24:41.370 "block_size": 4096, 00:24:41.370 "num_blocks": 23592960, 00:24:41.370 "uuid": "0182d045-795a-443b-ad13-478c5d3e8b79", 00:24:41.370 "assigned_rate_limits": { 00:24:41.370 "rw_ios_per_sec": 0, 00:24:41.370 "rw_mbytes_per_sec": 0, 00:24:41.370 "r_mbytes_per_sec": 0, 00:24:41.370 "w_mbytes_per_sec": 0 00:24:41.370 }, 00:24:41.370 "claimed": false, 00:24:41.370 "zoned": false, 00:24:41.370 "supported_io_types": { 00:24:41.370 "read": true, 00:24:41.370 "write": true, 00:24:41.370 "unmap": true, 00:24:41.370 "flush": true, 00:24:41.370 "reset": false, 00:24:41.370 "nvme_admin": false, 00:24:41.370 "nvme_io": false, 00:24:41.370 "nvme_io_md": false, 00:24:41.370 "write_zeroes": true, 00:24:41.370 "zcopy": false, 00:24:41.370 "get_zone_info": false, 00:24:41.370 "zone_management": false, 00:24:41.370 "zone_append": false, 00:24:41.370 "compare": false, 00:24:41.370 "compare_and_write": false, 00:24:41.370 "abort": false, 00:24:41.370 "seek_hole": false, 00:24:41.370 "seek_data": false, 00:24:41.370 "copy": false, 00:24:41.370 "nvme_iov_md": false 00:24:41.370 }, 00:24:41.370 "driver_specific": { 00:24:41.370 "ftl": { 00:24:41.370 "base_bdev": "d3361ec2-4e3d-4576-bfb2-cc4c4f20be16", 
00:24:41.370 "cache": "nvc0n1p0" 00:24:41.370 } 00:24:41.370 } 00:24:41.370 } 00:24:41.370 ]' 00:24:41.370 18:22:51 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:24:41.370 18:22:51 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:24:41.370 18:22:51 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:41.370 [2024-12-06 18:22:51.916102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.370 [2024-12-06 18:22:51.916159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:41.370 [2024-12-06 18:22:51.916179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:41.370 [2024-12-06 18:22:51.916196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.370 [2024-12-06 18:22:51.916238] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:41.370 [2024-12-06 18:22:51.920513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.370 [2024-12-06 18:22:51.920560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:41.370 [2024-12-06 18:22:51.920582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.260 ms 00:24:41.370 [2024-12-06 18:22:51.920592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.370 [2024-12-06 18:22:51.921151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.370 [2024-12-06 18:22:51.921176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:41.370 [2024-12-06 18:22:51.921190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:24:41.370 [2024-12-06 18:22:51.921200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.370 [2024-12-06 18:22:51.924051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.370 [2024-12-06 18:22:51.924077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:41.370 [2024-12-06 18:22:51.924091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.824 ms 00:24:41.370 [2024-12-06 18:22:51.924101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.370 [2024-12-06 18:22:51.929829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.370 [2024-12-06 18:22:51.929973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:41.370 [2024-12-06 18:22:51.930008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.698 ms 00:24:41.370 [2024-12-06 18:22:51.930019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.630 [2024-12-06 18:22:51.966734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.630 [2024-12-06 18:22:51.966775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:41.630 [2024-12-06 18:22:51.966795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.651 ms 00:24:41.630 [2024-12-06 18:22:51.966806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.630 [2024-12-06 18:22:51.988673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.630 [2024-12-06 18:22:51.988825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:41.630 [2024-12-06 18:22:51.988853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 21.809 ms 00:24:41.630 [2024-12-06 18:22:51.988867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.630 [2024-12-06 18:22:51.989111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.630 [2024-12-06 18:22:51.989127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:41.630 [2024-12-06 18:22:51.989141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:24:41.630 [2024-12-06 18:22:51.989151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.630 [2024-12-06 18:22:52.025588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.630 [2024-12-06 18:22:52.025655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:41.630 [2024-12-06 18:22:52.025675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.458 ms 00:24:41.630 [2024-12-06 18:22:52.025685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.630 [2024-12-06 18:22:52.063096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.630 [2024-12-06 18:22:52.063158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:41.630 [2024-12-06 18:22:52.063183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.340 ms 00:24:41.630 [2024-12-06 18:22:52.063193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.630 [2024-12-06 18:22:52.099853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.630 [2024-12-06 18:22:52.099902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:41.631 [2024-12-06 18:22:52.099922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.577 ms 00:24:41.631 [2024-12-06 18:22:52.099932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.631 [2024-12-06 18:22:52.135620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.631 [2024-12-06 18:22:52.135662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:41.631 [2024-12-06 18:22:52.135678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.596 ms 00:24:41.631 [2024-12-06 18:22:52.135688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.631 [2024-12-06 18:22:52.135794] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:41.631 [2024-12-06 18:22:52.135813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.135829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.135840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.135854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.135865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.135882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.135893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.135907] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.135918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.135931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.135942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.135956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.135967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.135980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.135990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 
[2024-12-06 18:22:52.136227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:24:41.631 [2024-12-06 18:22:52.136564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:41.631 [2024-12-06 18:22:52.136913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.136924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.136937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.136947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.136960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.136971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.136984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.136994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.137008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.137018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.137034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.137045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.137058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.137068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.137082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.137092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.137106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.137116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.137130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.137142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.137158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.137169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.137183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.137194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.137207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:41.632 [2024-12-06 18:22:52.137226] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:41.632 [2024-12-06 18:22:52.137241] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0182d045-795a-443b-ad13-478c5d3e8b79 00:24:41.632 [2024-12-06 18:22:52.137252] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:41.632 [2024-12-06 18:22:52.137273] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:41.632 [2024-12-06 18:22:52.137283] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:41.632 [2024-12-06 18:22:52.137299] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:41.632 [2024-12-06 18:22:52.137310] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:41.632 [2024-12-06 18:22:52.137323] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:24:41.632 [2024-12-06 18:22:52.137333] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:41.632 [2024-12-06 18:22:52.137345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:41.632 [2024-12-06 18:22:52.137354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:41.632 [2024-12-06 18:22:52.137367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.632 [2024-12-06 18:22:52.137377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:41.632 [2024-12-06 18:22:52.137390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.576 ms 00:24:41.632 [2024-12-06 18:22:52.137400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.632 [2024-12-06 18:22:52.157583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.632 [2024-12-06 18:22:52.157623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:41.632 [2024-12-06 18:22:52.157643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.174 ms 00:24:41.632 [2024-12-06 18:22:52.157653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.632 [2024-12-06 18:22:52.158279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.632 [2024-12-06 18:22:52.158295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:41.632 [2024-12-06 18:22:52.158309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:24:41.632 [2024-12-06 18:22:52.158319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.891 [2024-12-06 18:22:52.229852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.891 [2024-12-06 18:22:52.229927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:41.891 [2024-12-06 18:22:52.229946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.892 [2024-12-06 18:22:52.229957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.892 [2024-12-06 18:22:52.230099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.892 [2024-12-06 18:22:52.230111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:41.892 [2024-12-06 18:22:52.230124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.892 [2024-12-06 18:22:52.230134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.892 [2024-12-06 18:22:52.230209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.892 [2024-12-06 18:22:52.230222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:41.892 [2024-12-06 18:22:52.230242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.892 [2024-12-06 18:22:52.230252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.892 [2024-12-06 18:22:52.230302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.892 [2024-12-06 18:22:52.230313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:41.892 [2024-12-06 18:22:52.230327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.892 [2024-12-06 18:22:52.230337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.892 [2024-12-06 18:22:52.362742] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.892 [2024-12-06 18:22:52.362808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:41.892 [2024-12-06 18:22:52.362827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.892 [2024-12-06 18:22:52.362838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.892 [2024-12-06 18:22:52.464776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.892 [2024-12-06 18:22:52.464841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:41.892 [2024-12-06 18:22:52.464859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.892 [2024-12-06 18:22:52.464870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.892 [2024-12-06 18:22:52.465019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.892 [2024-12-06 18:22:52.465032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:41.892 [2024-12-06 18:22:52.465049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.892 [2024-12-06 18:22:52.465063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.892 [2024-12-06 18:22:52.465119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.892 [2024-12-06 18:22:52.465130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:41.892 [2024-12-06 18:22:52.465143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.892 [2024-12-06 18:22:52.465153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.892 [2024-12-06 18:22:52.465299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.892 [2024-12-06 18:22:52.465313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:41.892 [2024-12-06 18:22:52.465326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.892 [2024-12-06 18:22:52.465354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.892 [2024-12-06 18:22:52.465419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.892 [2024-12-06 18:22:52.465432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:41.892 [2024-12-06 18:22:52.465444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.892 [2024-12-06 18:22:52.465454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.892 [2024-12-06 18:22:52.465510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.892 [2024-12-06 18:22:52.465521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:41.892 [2024-12-06 18:22:52.465537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.892 [2024-12-06 18:22:52.465546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.892 [2024-12-06 18:22:52.465607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.892 [2024-12-06 18:22:52.465619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:41.892 [2024-12-06 18:22:52.465632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.892 [2024-12-06 18:22:52.465642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:41.892 [2024-12-06 18:22:52.465839] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 550.612 ms, result 0 00:24:42.277 true 00:24:42.277 18:22:52 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78133 00:24:42.277 18:22:52 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78133 ']' 00:24:42.277 18:22:52 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78133 00:24:42.277 18:22:52 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:24:42.277 18:22:52 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.277 18:22:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78133 00:24:42.277 killing process with pid 78133 00:24:42.277 18:22:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:42.277 18:22:52 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:42.277 18:22:52 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78133' 00:24:42.277 18:22:52 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78133 00:24:42.277 18:22:52 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78133 00:24:46.478 18:22:56 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:24:47.412 65536+0 records in 00:24:47.412 65536+0 records out 00:24:47.412 268435456 bytes (268 MB, 256 MiB) copied, 0.998109 s, 269 MB/s 00:24:47.412 18:22:57 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:47.671 [2024-12-06 18:22:58.014199] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
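For reference, the write phase that starts here reduces to two commands, as traced by trim.sh above (the dd output redirection is inferred from the spdk_dd --if argument below, since bash xtrace does not print redirections):

    # Generate a 256 MiB random test pattern: 65536 * 4 KiB = 268435456 bytes,
    # matching the dd summary above (trim.sh@66).
    dd if=/dev/urandom bs=4K count=65536 > /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern

    # Replay the bdev configuration saved by save_subsystem_config earlier and
    # copy the pattern onto the FTL bdev (trim.sh@69).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json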
00:24:47.671 [2024-12-06 18:22:58.014346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78338 ] 00:24:47.671 [2024-12-06 18:22:58.192493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.931 [2024-12-06 18:22:58.312460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.190 [2024-12-06 18:22:58.709922] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:48.190 [2024-12-06 18:22:58.710285] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:48.450 [2024-12-06 18:22:58.874239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.450 [2024-12-06 18:22:58.874317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:48.450 [2024-12-06 18:22:58.874333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:48.450 [2024-12-06 18:22:58.874344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.450 [2024-12-06 18:22:58.877606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.450 [2024-12-06 18:22:58.877649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:48.450 [2024-12-06 18:22:58.877663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.245 ms 00:24:48.450 [2024-12-06 18:22:58.877673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.450 [2024-12-06 18:22:58.877773] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:48.450 [2024-12-06 18:22:58.878853] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:48.450 [2024-12-06 18:22:58.878890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.450 [2024-12-06 18:22:58.878901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:48.450 [2024-12-06 18:22:58.878912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.127 ms 00:24:48.450 [2024-12-06 18:22:58.878922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.450 [2024-12-06 18:22:58.880408] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:48.450 [2024-12-06 18:22:58.899622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.450 [2024-12-06 18:22:58.899671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:48.450 [2024-12-06 18:22:58.899688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.244 ms 00:24:48.450 [2024-12-06 18:22:58.899698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.450 [2024-12-06 18:22:58.899821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.450 [2024-12-06 18:22:58.899836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:48.450 [2024-12-06 18:22:58.899847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:48.450 [2024-12-06 18:22:58.899857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.450 [2024-12-06 18:22:58.906892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
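The FTL startup that spdk_dd replays below is the same bring-up an explicit create would trigger; a minimal sketch, assuming SPDK's rpc.py bdev_ftl_create flags (-b name, -d base bdev, -c cache bdev), which are not shown in this log:

    # Hypothetical equivalent of the ftl.json entry being loaded here; the base
    # bdev alias and the nvc0n1p0 cache name are the ones this run reports.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_create -b ftl0 \
        -d d3361ec2-4e3d-4576-bfb2-cc4c4f20be16 -c nvc0n1p0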
00:24:48.450 [2024-12-06 18:22:58.907106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:48.450 [2024-12-06 18:22:58.907127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.000 ms 00:24:48.450 [2024-12-06 18:22:58.907139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.450 [2024-12-06 18:22:58.907260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.450 [2024-12-06 18:22:58.907298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:48.450 [2024-12-06 18:22:58.907309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:48.450 [2024-12-06 18:22:58.907319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.450 [2024-12-06 18:22:58.907354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.450 [2024-12-06 18:22:58.907366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:48.450 [2024-12-06 18:22:58.907376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:48.451 [2024-12-06 18:22:58.907386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.451 [2024-12-06 18:22:58.907412] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:48.451 [2024-12-06 18:22:58.912197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.451 [2024-12-06 18:22:58.912232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:48.451 [2024-12-06 18:22:58.912245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.801 ms 00:24:48.451 [2024-12-06 18:22:58.912255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.451 [2024-12-06 18:22:58.912344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.451 [2024-12-06 18:22:58.912358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:48.451 [2024-12-06 18:22:58.912369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:48.451 [2024-12-06 18:22:58.912379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.451 [2024-12-06 18:22:58.912407] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:48.451 [2024-12-06 18:22:58.912431] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:48.451 [2024-12-06 18:22:58.912466] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:48.451 [2024-12-06 18:22:58.912484] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:48.451 [2024-12-06 18:22:58.912574] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:48.451 [2024-12-06 18:22:58.912587] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:48.451 [2024-12-06 18:22:58.912600] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:48.451 [2024-12-06 18:22:58.912616] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:48.451 [2024-12-06 18:22:58.912628] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:48.451 [2024-12-06 18:22:58.912639] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:48.451 [2024-12-06 18:22:58.912649] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:48.451 [2024-12-06 18:22:58.912658] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:48.451 [2024-12-06 18:22:58.912668] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:48.451 [2024-12-06 18:22:58.912679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.451 [2024-12-06 18:22:58.912689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:48.451 [2024-12-06 18:22:58.912699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:24:48.451 [2024-12-06 18:22:58.912709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.451 [2024-12-06 18:22:58.912786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.451 [2024-12-06 18:22:58.912800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:48.451 [2024-12-06 18:22:58.912811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:48.451 [2024-12-06 18:22:58.912821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.451 [2024-12-06 18:22:58.912913] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:48.451 [2024-12-06 18:22:58.912926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:48.451 [2024-12-06 18:22:58.912936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:48.451 [2024-12-06 18:22:58.912946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.451 [2024-12-06 18:22:58.912957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:48.451 [2024-12-06 18:22:58.912966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:48.451 [2024-12-06 18:22:58.912976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:48.451 [2024-12-06 18:22:58.912985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:48.451 [2024-12-06 18:22:58.912995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:48.451 [2024-12-06 18:22:58.913004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:48.451 [2024-12-06 18:22:58.913014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:48.451 [2024-12-06 18:22:58.913034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:48.451 [2024-12-06 18:22:58.913043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:48.451 [2024-12-06 18:22:58.913053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:48.451 [2024-12-06 18:22:58.913063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:48.451 [2024-12-06 18:22:58.913072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.451 [2024-12-06 18:22:58.913081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:48.451 [2024-12-06 18:22:58.913091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:48.451 [2024-12-06 18:22:58.913100] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.451 [2024-12-06 18:22:58.913109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:48.451 [2024-12-06 18:22:58.913118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:48.451 [2024-12-06 18:22:58.913127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:48.451 [2024-12-06 18:22:58.913136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:48.451 [2024-12-06 18:22:58.913146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:48.451 [2024-12-06 18:22:58.913155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:48.451 [2024-12-06 18:22:58.913164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:48.451 [2024-12-06 18:22:58.913173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:48.451 [2024-12-06 18:22:58.913182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:48.451 [2024-12-06 18:22:58.913191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:48.451 [2024-12-06 18:22:58.913200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:48.451 [2024-12-06 18:22:58.913208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:48.451 [2024-12-06 18:22:58.913217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:48.451 [2024-12-06 18:22:58.913226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:48.451 [2024-12-06 18:22:58.913235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:48.451 [2024-12-06 18:22:58.913244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:48.451 [2024-12-06 18:22:58.913253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:48.451 [2024-12-06 18:22:58.913261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:48.451 [2024-12-06 18:22:58.913283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:48.451 [2024-12-06 18:22:58.913292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:48.451 [2024-12-06 18:22:58.913301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.451 [2024-12-06 18:22:58.913310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:48.451 [2024-12-06 18:22:58.913319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:48.451 [2024-12-06 18:22:58.913330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.451 [2024-12-06 18:22:58.913340] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:48.451 [2024-12-06 18:22:58.913350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:48.452 [2024-12-06 18:22:58.913364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:48.452 [2024-12-06 18:22:58.913374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.452 [2024-12-06 18:22:58.913384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:48.452 [2024-12-06 18:22:58.913393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:48.452 [2024-12-06 18:22:58.913402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:48.452 
[2024-12-06 18:22:58.913412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:48.452 [2024-12-06 18:22:58.913421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:48.452 [2024-12-06 18:22:58.913430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:48.452 [2024-12-06 18:22:58.913440] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:48.452 [2024-12-06 18:22:58.913452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:48.452 [2024-12-06 18:22:58.913463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:48.452 [2024-12-06 18:22:58.913473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:48.452 [2024-12-06 18:22:58.913483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:48.452 [2024-12-06 18:22:58.913493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:48.452 [2024-12-06 18:22:58.913503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:48.452 [2024-12-06 18:22:58.913513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:48.452 [2024-12-06 18:22:58.913523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:48.452 [2024-12-06 18:22:58.913534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:48.452 [2024-12-06 18:22:58.913544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:48.452 [2024-12-06 18:22:58.913554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:48.452 [2024-12-06 18:22:58.913565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:48.452 [2024-12-06 18:22:58.913575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:48.452 [2024-12-06 18:22:58.913585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:48.452 [2024-12-06 18:22:58.913595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:48.452 [2024-12-06 18:22:58.913604] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:48.452 [2024-12-06 18:22:58.913615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:48.452 [2024-12-06 18:22:58.913626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:48.452 [2024-12-06 18:22:58.913637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:48.452 [2024-12-06 18:22:58.913647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:48.452 [2024-12-06 18:22:58.913657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:48.452 [2024-12-06 18:22:58.913668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.452 [2024-12-06 18:22:58.913682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:48.452 [2024-12-06 18:22:58.913693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:24:48.452 [2024-12-06 18:22:58.913703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.452 [2024-12-06 18:22:58.952867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.452 [2024-12-06 18:22:58.952921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:48.452 [2024-12-06 18:22:58.952937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.165 ms 00:24:48.452 [2024-12-06 18:22:58.952948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.452 [2024-12-06 18:22:58.953113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.452 [2024-12-06 18:22:58.953126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:48.452 [2024-12-06 18:22:58.953138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:48.452 [2024-12-06 18:22:58.953148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.452 [2024-12-06 18:22:59.013740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.452 [2024-12-06 18:22:59.013794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:48.452 [2024-12-06 18:22:59.013813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.664 ms 00:24:48.452 [2024-12-06 18:22:59.013825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.452 [2024-12-06 18:22:59.013951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.452 [2024-12-06 18:22:59.013965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:48.452 [2024-12-06 18:22:59.013975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:48.452 [2024-12-06 18:22:59.013986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.452 [2024-12-06 18:22:59.014443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.452 [2024-12-06 18:22:59.014458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:48.452 [2024-12-06 18:22:59.014476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:24:48.452 [2024-12-06 18:22:59.014486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.452 [2024-12-06 18:22:59.014609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.452 [2024-12-06 18:22:59.014623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:48.452 [2024-12-06 18:22:59.014634] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:24:48.452 [2024-12-06 18:22:59.014644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.712 [2024-12-06 18:22:59.035660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.712 [2024-12-06 18:22:59.035711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:48.712 [2024-12-06 18:22:59.035727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.025 ms 00:24:48.712 [2024-12-06 18:22:59.035738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.712 [2024-12-06 18:22:59.056473] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:48.712 [2024-12-06 18:22:59.056547] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:48.712 [2024-12-06 18:22:59.056566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.712 [2024-12-06 18:22:59.056578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:48.712 [2024-12-06 18:22:59.056592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.721 ms 00:24:48.712 [2024-12-06 18:22:59.056602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.712 [2024-12-06 18:22:59.087668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.712 [2024-12-06 18:22:59.087909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:48.712 [2024-12-06 18:22:59.087935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.974 ms 00:24:48.712 [2024-12-06 18:22:59.087946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.712 [2024-12-06 18:22:59.106938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.712 [2024-12-06 18:22:59.106988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:48.712 [2024-12-06 18:22:59.107004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.917 ms 00:24:48.712 [2024-12-06 18:22:59.107014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.712 [2024-12-06 18:22:59.125656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.712 [2024-12-06 18:22:59.125831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:48.712 [2024-12-06 18:22:59.125853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.580 ms 00:24:48.712 [2024-12-06 18:22:59.125864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.712 [2024-12-06 18:22:59.126769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.712 [2024-12-06 18:22:59.126803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:48.712 [2024-12-06 18:22:59.126816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:24:48.712 [2024-12-06 18:22:59.126826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.712 [2024-12-06 18:22:59.214635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.712 [2024-12-06 18:22:59.214697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:48.712 [2024-12-06 18:22:59.214714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 87.919 ms 00:24:48.712 [2024-12-06 18:22:59.214726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.712 [2024-12-06 18:22:59.226724] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:48.712 [2024-12-06 18:22:59.243187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.712 [2024-12-06 18:22:59.243248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:48.712 [2024-12-06 18:22:59.243278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.370 ms 00:24:48.712 [2024-12-06 18:22:59.243290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.712 [2024-12-06 18:22:59.243433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.712 [2024-12-06 18:22:59.243448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:48.712 [2024-12-06 18:22:59.243459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:48.712 [2024-12-06 18:22:59.243470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.712 [2024-12-06 18:22:59.243526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.712 [2024-12-06 18:22:59.243538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:48.712 [2024-12-06 18:22:59.243549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:48.712 [2024-12-06 18:22:59.243559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.712 [2024-12-06 18:22:59.243594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.712 [2024-12-06 18:22:59.243610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:48.712 [2024-12-06 18:22:59.243620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:48.712 [2024-12-06 18:22:59.243630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.712 [2024-12-06 18:22:59.243667] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:48.712 [2024-12-06 18:22:59.243679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.712 [2024-12-06 18:22:59.243689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:48.712 [2024-12-06 18:22:59.243699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:48.712 [2024-12-06 18:22:59.243709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.712 [2024-12-06 18:22:59.280315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.712 [2024-12-06 18:22:59.280484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:48.712 [2024-12-06 18:22:59.280507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.643 ms 00:24:48.712 [2024-12-06 18:22:59.280519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.712 [2024-12-06 18:22:59.280685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.712 [2024-12-06 18:22:59.280700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:48.712 [2024-12-06 18:22:59.280712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:48.712 [2024-12-06 18:22:59.280723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
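Each management step above is logged as a 427/428/430/431 trace_step quadruplet (action, name, duration, status). One way to tabulate step durations from a saved copy of this console output (build.log is a placeholder file name):

    # Pair each step name (428:trace_step) with the duration that follows it
    # (430:trace_step) and print "name: duration".
    awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); print name ": " $0 }' build.log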
00:24:48.712 [2024-12-06 18:22:59.281612] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:48.712 [2024-12-06 18:22:59.285963] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 407.728 ms, result 0 00:24:48.970 [2024-12-06 18:22:59.286949] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:48.970 [2024-12-06 18:22:59.305434] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:49.902  [2024-12-06T18:23:01.412Z] Copying: 23/256 [MB] (23 MBps) [2024-12-06T18:23:02.433Z] Copying: 46/256 [MB] (23 MBps) [2024-12-06T18:23:03.367Z] Copying: 68/256 [MB] (22 MBps) [2024-12-06T18:23:04.740Z] Copying: 90/256 [MB] (21 MBps) [2024-12-06T18:23:05.306Z] Copying: 111/256 [MB] (21 MBps) [2024-12-06T18:23:06.678Z] Copying: 134/256 [MB] (22 MBps) [2024-12-06T18:23:07.613Z] Copying: 156/256 [MB] (22 MBps) [2024-12-06T18:23:08.548Z] Copying: 178/256 [MB] (21 MBps) [2024-12-06T18:23:09.554Z] Copying: 200/256 [MB] (22 MBps) [2024-12-06T18:23:10.491Z] Copying: 222/256 [MB] (22 MBps) [2024-12-06T18:23:11.060Z] Copying: 245/256 [MB] (22 MBps) [2024-12-06T18:23:11.060Z] Copying: 256/256 [MB] (average 22 MBps)[2024-12-06 18:23:10.757918] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:00.484 [2024-12-06 18:23:10.772911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.484 [2024-12-06 18:23:10.773081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:00.484 [2024-12-06 18:23:10.773197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:00.484 [2024-12-06 18:23:10.773224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.484 [2024-12-06 18:23:10.773258] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:00.484 [2024-12-06 18:23:10.777350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.484 [2024-12-06 18:23:10.777381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:00.484 [2024-12-06 18:23:10.777394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.065 ms 00:25:00.484 [2024-12-06 18:23:10.777404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.484 [2024-12-06 18:23:10.779499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.484 [2024-12-06 18:23:10.779644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:00.484 [2024-12-06 18:23:10.779665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.058 ms 00:25:00.484 [2024-12-06 18:23:10.779676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.484 [2024-12-06 18:23:10.786861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.484 [2024-12-06 18:23:10.787032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:00.484 [2024-12-06 18:23:10.787052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.172 ms 00:25:00.484 [2024-12-06 18:23:10.787062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.484 [2024-12-06 18:23:10.792718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.484 
[2024-12-06 18:23:10.792754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:00.484 [2024-12-06 18:23:10.792767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.626 ms 00:25:00.484 [2024-12-06 18:23:10.792777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.484 [2024-12-06 18:23:10.832109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.484 [2024-12-06 18:23:10.832395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:00.484 [2024-12-06 18:23:10.832549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.312 ms 00:25:00.484 [2024-12-06 18:23:10.832587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.484 [2024-12-06 18:23:10.853746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.484 [2024-12-06 18:23:10.853918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:00.484 [2024-12-06 18:23:10.853947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.060 ms 00:25:00.484 [2024-12-06 18:23:10.853958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.484 [2024-12-06 18:23:10.854106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.484 [2024-12-06 18:23:10.854120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:00.484 [2024-12-06 18:23:10.854132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:25:00.484 [2024-12-06 18:23:10.854153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.484 [2024-12-06 18:23:10.891209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.484 [2024-12-06 18:23:10.891280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:00.484 [2024-12-06 18:23:10.891295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.097 ms 00:25:00.484 [2024-12-06 18:23:10.891305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.484 [2024-12-06 18:23:10.928269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.484 [2024-12-06 18:23:10.928315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:00.484 [2024-12-06 18:23:10.928329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.957 ms 00:25:00.484 [2024-12-06 18:23:10.928340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.484 [2024-12-06 18:23:10.964802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.484 [2024-12-06 18:23:10.964848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:00.484 [2024-12-06 18:23:10.964862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.459 ms 00:25:00.484 [2024-12-06 18:23:10.964872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.484 [2024-12-06 18:23:11.002218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.484 [2024-12-06 18:23:11.002279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:00.484 [2024-12-06 18:23:11.002296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.312 ms 00:25:00.484 [2024-12-06 18:23:11.002307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.484 [2024-12-06 18:23:11.002403] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:00.484 [2024-12-06 18:23:11.002423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:00.484 [2024-12-06 18:23:11.002436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:00.484 [2024-12-06 18:23:11.002447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:00.484 [2024-12-06 18:23:11.002459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:00.484 [2024-12-06 18:23:11.002470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:00.484 [2024-12-06 18:23:11.002480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:00.484 [2024-12-06 18:23:11.002492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:00.484 [2024-12-06 18:23:11.002502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:00.484 [2024-12-06 18:23:11.002513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:00.484 [2024-12-06 18:23:11.002523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:00.484 [2024-12-06 18:23:11.002534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:00.484 [2024-12-06 18:23:11.002545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:00.484 [2024-12-06 18:23:11.002556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:00.484 [2024-12-06 18:23:11.002567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:00.484 [2024-12-06 18:23:11.002577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:00.484 [2024-12-06 18:23:11.002588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:00.484 [2024-12-06 18:23:11.002598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002681] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 
18:23:11.002949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.002990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:25:00.485 [2024-12-06 18:23:11.003211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:00.485 [2024-12-06 18:23:11.003324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:00.486 [2024-12-06 18:23:11.003335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:00.486 [2024-12-06 18:23:11.003345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:00.486 [2024-12-06 18:23:11.003356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:00.486 [2024-12-06 18:23:11.003366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:00.486 [2024-12-06 18:23:11.003377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:00.486 [2024-12-06 18:23:11.003388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:00.486 [2024-12-06 18:23:11.003398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:00.486 [2024-12-06 18:23:11.003409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:00.486 [2024-12-06 18:23:11.003420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:00.486 [2024-12-06 18:23:11.003445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:00.486 [2024-12-06 18:23:11.003456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:00.486 [2024-12-06 18:23:11.003467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:00.486 [2024-12-06 18:23:11.003477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:00.486 [2024-12-06 18:23:11.003488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:25:00.486 [2024-12-06 18:23:11.003499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:00.486 [2024-12-06 18:23:11.003517] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:00.486 [2024-12-06 18:23:11.003527] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0182d045-795a-443b-ad13-478c5d3e8b79 00:25:00.486 [2024-12-06 18:23:11.003538] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:00.486 [2024-12-06 18:23:11.003548] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:00.486 [2024-12-06 18:23:11.003557] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:00.486 [2024-12-06 18:23:11.003568] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:00.486 [2024-12-06 18:23:11.003578] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:00.486 [2024-12-06 18:23:11.003588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:00.486 [2024-12-06 18:23:11.003598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:00.486 [2024-12-06 18:23:11.003607] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:00.486 [2024-12-06 18:23:11.003616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:00.486 [2024-12-06 18:23:11.003626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.486 [2024-12-06 18:23:11.003640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:00.486 [2024-12-06 18:23:11.003651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:25:00.486 [2024-12-06 18:23:11.003661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.486 [2024-12-06 18:23:11.023863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.486 [2024-12-06 18:23:11.023906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:00.486 [2024-12-06 18:23:11.023920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.211 ms 00:25:00.486 [2024-12-06 18:23:11.023931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.486 [2024-12-06 18:23:11.024593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.486 [2024-12-06 18:23:11.024611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:00.486 [2024-12-06 18:23:11.024623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:25:00.486 [2024-12-06 18:23:11.024632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.746 [2024-12-06 18:23:11.079307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.746 [2024-12-06 18:23:11.079495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:00.746 [2024-12-06 18:23:11.079519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.746 [2024-12-06 18:23:11.079530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.746 [2024-12-06 18:23:11.079670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.746 [2024-12-06 18:23:11.079683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:00.746 [2024-12-06 18:23:11.079694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:25:00.746 [2024-12-06 18:23:11.079704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.746 [2024-12-06 18:23:11.079759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.746 [2024-12-06 18:23:11.079772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:00.746 [2024-12-06 18:23:11.079782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.746 [2024-12-06 18:23:11.079792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.746 [2024-12-06 18:23:11.079812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.746 [2024-12-06 18:23:11.079828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:00.746 [2024-12-06 18:23:11.079838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.746 [2024-12-06 18:23:11.079848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.746 [2024-12-06 18:23:11.202799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.746 [2024-12-06 18:23:11.203025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:00.746 [2024-12-06 18:23:11.203049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.746 [2024-12-06 18:23:11.203060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.746 [2024-12-06 18:23:11.303887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.746 [2024-12-06 18:23:11.303952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:00.746 [2024-12-06 18:23:11.303966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.746 [2024-12-06 18:23:11.303992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.746 [2024-12-06 18:23:11.304092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.746 [2024-12-06 18:23:11.304105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:00.746 [2024-12-06 18:23:11.304115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.746 [2024-12-06 18:23:11.304126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.746 [2024-12-06 18:23:11.304154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.746 [2024-12-06 18:23:11.304165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:00.746 [2024-12-06 18:23:11.304181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.746 [2024-12-06 18:23:11.304192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.746 [2024-12-06 18:23:11.304339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.746 [2024-12-06 18:23:11.304354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:00.746 [2024-12-06 18:23:11.304365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.746 [2024-12-06 18:23:11.304375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.746 [2024-12-06 18:23:11.304415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.746 [2024-12-06 18:23:11.304427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:00.746 
[2024-12-06 18:23:11.304438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.746 [2024-12-06 18:23:11.304452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.746 [2024-12-06 18:23:11.304492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.746 [2024-12-06 18:23:11.304503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:00.746 [2024-12-06 18:23:11.304514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.746 [2024-12-06 18:23:11.304524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.746 [2024-12-06 18:23:11.304569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.746 [2024-12-06 18:23:11.304581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:00.746 [2024-12-06 18:23:11.304595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.746 [2024-12-06 18:23:11.304604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.746 [2024-12-06 18:23:11.304738] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.683 ms, result 0 00:25:02.123 00:25:02.123 00:25:02.123 18:23:12 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78490 00:25:02.123 18:23:12 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:25:02.123 18:23:12 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78490 00:25:02.123 18:23:12 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78490 ']' 00:25:02.123 18:23:12 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.123 18:23:12 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.123 18:23:12 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.123 18:23:12 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.123 18:23:12 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:02.123 [2024-12-06 18:23:12.647203] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
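
The restart sequence above (trim.sh@71 through trim.sh@75) follows the usual SPDK autotest pattern: launch a fresh spdk_tgt, block until its RPC socket answers, then replay the previously saved JSON configuration; the 'unable to find bdev' notices that follow are the expected first-pass misses while load_config waits for nvc0n1 to be created. A minimal stand-alone sketch of the same flow, assuming the helpers from common/autotest_common.sh are sourced; ftl_config.json is a hypothetical stand-in for wherever the configuration was saved, since this run streams it from the test script:

  # Launch the target with FTL init logging, as trim.sh@71 does
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  # waitforlisten (the autotest_common.sh helper invoked at trim.sh@73)
  # polls until the target accepts RPCs on /var/tmp/spdk.sock
  waitforlisten "$svcpid"
  # Replay the saved bdev/FTL configuration through the RPC client
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < ftl_config.json
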
00:25:02.123 [2024-12-06 18:23:12.647355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78490 ] 00:25:02.382 [2024-12-06 18:23:12.826061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.382 [2024-12-06 18:23:12.942431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.321 18:23:13 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.321 18:23:13 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:03.321 18:23:13 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:25:03.580 [2024-12-06 18:23:14.009814] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:03.580 [2024-12-06 18:23:14.009890] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:03.840 [2024-12-06 18:23:14.188891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.840 [2024-12-06 18:23:14.188961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:03.840 [2024-12-06 18:23:14.188980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:03.840 [2024-12-06 18:23:14.189007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.840 [2024-12-06 18:23:14.192095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.840 [2024-12-06 18:23:14.192245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:03.840 [2024-12-06 18:23:14.192301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.070 ms 00:25:03.840 [2024-12-06 18:23:14.192313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.840 [2024-12-06 18:23:14.192457] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:03.840 [2024-12-06 18:23:14.193395] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:03.840 [2024-12-06 18:23:14.193431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.840 [2024-12-06 18:23:14.193442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:03.841 [2024-12-06 18:23:14.193455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:25:03.841 [2024-12-06 18:23:14.193465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.841 [2024-12-06 18:23:14.194921] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:03.841 [2024-12-06 18:23:14.213704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.841 [2024-12-06 18:23:14.213763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:03.841 [2024-12-06 18:23:14.213779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.817 ms 00:25:03.841 [2024-12-06 18:23:14.213791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.841 [2024-12-06 18:23:14.213890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.841 [2024-12-06 18:23:14.213907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:03.841 [2024-12-06 18:23:14.213919] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:25:03.841 [2024-12-06 18:23:14.213931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.841 [2024-12-06 18:23:14.220633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.841 [2024-12-06 18:23:14.220674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:03.841 [2024-12-06 18:23:14.220686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.664 ms 00:25:03.841 [2024-12-06 18:23:14.220699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.841 [2024-12-06 18:23:14.220811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.841 [2024-12-06 18:23:14.220828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:03.841 [2024-12-06 18:23:14.220839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:25:03.841 [2024-12-06 18:23:14.220856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.841 [2024-12-06 18:23:14.220885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.841 [2024-12-06 18:23:14.220899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:03.841 [2024-12-06 18:23:14.220910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:03.841 [2024-12-06 18:23:14.220922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.841 [2024-12-06 18:23:14.220947] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:03.841 [2024-12-06 18:23:14.225638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.841 [2024-12-06 18:23:14.225671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:03.841 [2024-12-06 18:23:14.225687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.702 ms 00:25:03.841 [2024-12-06 18:23:14.225697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.841 [2024-12-06 18:23:14.225775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.841 [2024-12-06 18:23:14.225788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:03.841 [2024-12-06 18:23:14.225801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:03.841 [2024-12-06 18:23:14.225814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.841 [2024-12-06 18:23:14.225838] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:03.841 [2024-12-06 18:23:14.225861] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:03.841 [2024-12-06 18:23:14.225908] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:03.841 [2024-12-06 18:23:14.225928] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:03.841 [2024-12-06 18:23:14.226019] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:03.841 [2024-12-06 18:23:14.226033] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:03.841 [2024-12-06 18:23:14.226051] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:03.841 [2024-12-06 18:23:14.226064] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:03.841 [2024-12-06 18:23:14.226079] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:03.841 [2024-12-06 18:23:14.226091] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:03.841 [2024-12-06 18:23:14.226103] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:03.841 [2024-12-06 18:23:14.226113] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:03.841 [2024-12-06 18:23:14.226128] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:03.841 [2024-12-06 18:23:14.226138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.841 [2024-12-06 18:23:14.226151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:03.841 [2024-12-06 18:23:14.226161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:25:03.841 [2024-12-06 18:23:14.226173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.841 [2024-12-06 18:23:14.226250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.841 [2024-12-06 18:23:14.226279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:03.841 [2024-12-06 18:23:14.226290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:03.841 [2024-12-06 18:23:14.226303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.841 [2024-12-06 18:23:14.226397] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:03.841 [2024-12-06 18:23:14.226413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:03.841 [2024-12-06 18:23:14.226424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:03.841 [2024-12-06 18:23:14.226437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.841 [2024-12-06 18:23:14.226447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:03.841 [2024-12-06 18:23:14.226461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:03.841 [2024-12-06 18:23:14.226470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:03.841 [2024-12-06 18:23:14.226484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:03.841 [2024-12-06 18:23:14.226494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:03.841 [2024-12-06 18:23:14.226505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:03.841 [2024-12-06 18:23:14.226515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:03.841 [2024-12-06 18:23:14.226527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:03.841 [2024-12-06 18:23:14.226537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:03.841 [2024-12-06 18:23:14.226549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:03.841 [2024-12-06 18:23:14.226558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:03.841 [2024-12-06 18:23:14.226569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.841 
[2024-12-06 18:23:14.226578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:03.841 [2024-12-06 18:23:14.226590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:03.841 [2024-12-06 18:23:14.226608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.841 [2024-12-06 18:23:14.226620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:03.841 [2024-12-06 18:23:14.226629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:03.841 [2024-12-06 18:23:14.226640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.841 [2024-12-06 18:23:14.226649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:03.841 [2024-12-06 18:23:14.226664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:03.841 [2024-12-06 18:23:14.226673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.841 [2024-12-06 18:23:14.226685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:03.841 [2024-12-06 18:23:14.226694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:03.841 [2024-12-06 18:23:14.226705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.841 [2024-12-06 18:23:14.226715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:03.841 [2024-12-06 18:23:14.226728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:03.841 [2024-12-06 18:23:14.226736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.841 [2024-12-06 18:23:14.226748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:03.841 [2024-12-06 18:23:14.226756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:03.841 [2024-12-06 18:23:14.226767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:03.841 [2024-12-06 18:23:14.226777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:03.841 [2024-12-06 18:23:14.226788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:03.841 [2024-12-06 18:23:14.226797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:03.841 [2024-12-06 18:23:14.226808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:03.841 [2024-12-06 18:23:14.226817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:03.841 [2024-12-06 18:23:14.226832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.841 [2024-12-06 18:23:14.226841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:03.841 [2024-12-06 18:23:14.226852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:03.841 [2024-12-06 18:23:14.226861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.841 [2024-12-06 18:23:14.226872] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:03.841 [2024-12-06 18:23:14.226885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:03.841 [2024-12-06 18:23:14.226897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:03.841 [2024-12-06 18:23:14.226907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.841 [2024-12-06 18:23:14.226919] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:03.841 [2024-12-06 18:23:14.226928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:03.841 [2024-12-06 18:23:14.226940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:03.841 [2024-12-06 18:23:14.226949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:03.842 [2024-12-06 18:23:14.226960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:03.842 [2024-12-06 18:23:14.226969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:03.842 [2024-12-06 18:23:14.226982] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:03.842 [2024-12-06 18:23:14.226994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:03.842 [2024-12-06 18:23:14.227011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:03.842 [2024-12-06 18:23:14.227022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:03.842 [2024-12-06 18:23:14.227035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:03.842 [2024-12-06 18:23:14.227045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:03.842 [2024-12-06 18:23:14.227057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:03.842 [2024-12-06 18:23:14.227068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:03.842 [2024-12-06 18:23:14.227081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:03.842 [2024-12-06 18:23:14.227091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:03.842 [2024-12-06 18:23:14.227103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:03.842 [2024-12-06 18:23:14.227114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:03.842 [2024-12-06 18:23:14.227127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:03.842 [2024-12-06 18:23:14.227137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:03.842 [2024-12-06 18:23:14.227150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:03.842 [2024-12-06 18:23:14.227160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:03.842 [2024-12-06 18:23:14.227172] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:03.842 [2024-12-06 
18:23:14.227183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:03.842 [2024-12-06 18:23:14.227199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:03.842 [2024-12-06 18:23:14.227210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:03.842 [2024-12-06 18:23:14.227222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:03.842 [2024-12-06 18:23:14.227233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:03.842 [2024-12-06 18:23:14.227246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.842 [2024-12-06 18:23:14.227257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:03.842 [2024-12-06 18:23:14.227279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:25:03.842 [2024-12-06 18:23:14.227292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.842 [2024-12-06 18:23:14.267192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.842 [2024-12-06 18:23:14.267237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:03.842 [2024-12-06 18:23:14.267256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.901 ms 00:25:03.842 [2024-12-06 18:23:14.267284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.842 [2024-12-06 18:23:14.267446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.842 [2024-12-06 18:23:14.267459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:03.842 [2024-12-06 18:23:14.267473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:03.842 [2024-12-06 18:23:14.267484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.842 [2024-12-06 18:23:14.312824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.842 [2024-12-06 18:23:14.312876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:03.842 [2024-12-06 18:23:14.312894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.383 ms 00:25:03.842 [2024-12-06 18:23:14.312905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.842 [2024-12-06 18:23:14.313017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.842 [2024-12-06 18:23:14.313030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:03.842 [2024-12-06 18:23:14.313044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:03.842 [2024-12-06 18:23:14.313054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.842 [2024-12-06 18:23:14.313499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.842 [2024-12-06 18:23:14.313516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:03.842 [2024-12-06 18:23:14.313530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:25:03.842 [2024-12-06 18:23:14.313539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:03.842 [2024-12-06 18:23:14.313661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.842 [2024-12-06 18:23:14.313674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:03.842 [2024-12-06 18:23:14.313687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:25:03.842 [2024-12-06 18:23:14.313697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.842 [2024-12-06 18:23:14.333215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.842 [2024-12-06 18:23:14.333259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:03.842 [2024-12-06 18:23:14.333291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.521 ms 00:25:03.842 [2024-12-06 18:23:14.333302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.842 [2024-12-06 18:23:14.368423] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:03.842 [2024-12-06 18:23:14.368490] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:03.842 [2024-12-06 18:23:14.368514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.842 [2024-12-06 18:23:14.368526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:03.842 [2024-12-06 18:23:14.368543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.130 ms 00:25:03.842 [2024-12-06 18:23:14.368565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.842 [2024-12-06 18:23:14.399486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.842 [2024-12-06 18:23:14.399558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:03.842 [2024-12-06 18:23:14.399577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.819 ms 00:25:03.842 [2024-12-06 18:23:14.399588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.102 [2024-12-06 18:23:14.418190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.102 [2024-12-06 18:23:14.418233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:04.102 [2024-12-06 18:23:14.418252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.492 ms 00:25:04.102 [2024-12-06 18:23:14.418276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.102 [2024-12-06 18:23:14.436348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.102 [2024-12-06 18:23:14.436513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:04.102 [2024-12-06 18:23:14.436539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.973 ms 00:25:04.102 [2024-12-06 18:23:14.436549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.102 [2024-12-06 18:23:14.437286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.102 [2024-12-06 18:23:14.437311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:04.102 [2024-12-06 18:23:14.437326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:25:04.102 [2024-12-06 18:23:14.437336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.102 [2024-12-06 
18:23:14.521915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.102 [2024-12-06 18:23:14.521974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:04.102 [2024-12-06 18:23:14.521993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.682 ms 00:25:04.102 [2024-12-06 18:23:14.522005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.102 [2024-12-06 18:23:14.533460] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:04.102 [2024-12-06 18:23:14.549806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.102 [2024-12-06 18:23:14.549867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:04.102 [2024-12-06 18:23:14.549903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.704 ms 00:25:04.102 [2024-12-06 18:23:14.549916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.102 [2024-12-06 18:23:14.550021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.102 [2024-12-06 18:23:14.550037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:04.102 [2024-12-06 18:23:14.550049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:04.102 [2024-12-06 18:23:14.550061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.102 [2024-12-06 18:23:14.550112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.102 [2024-12-06 18:23:14.550126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:04.102 [2024-12-06 18:23:14.550137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:04.102 [2024-12-06 18:23:14.550154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.102 [2024-12-06 18:23:14.550178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.102 [2024-12-06 18:23:14.550192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:04.102 [2024-12-06 18:23:14.550202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:04.102 [2024-12-06 18:23:14.550215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.102 [2024-12-06 18:23:14.550254] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:04.102 [2024-12-06 18:23:14.550271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.102 [2024-12-06 18:23:14.550307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:04.102 [2024-12-06 18:23:14.550321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:04.102 [2024-12-06 18:23:14.550331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.102 [2024-12-06 18:23:14.588501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.102 [2024-12-06 18:23:14.588568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:04.102 [2024-12-06 18:23:14.588600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.194 ms 00:25:04.102 [2024-12-06 18:23:14.588611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.102 [2024-12-06 18:23:14.588775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.102 [2024-12-06 18:23:14.588790] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:04.102 [2024-12-06 18:23:14.588806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:04.102 [2024-12-06 18:23:14.588822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.102 [2024-12-06 18:23:14.589837] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:04.102 [2024-12-06 18:23:14.594792] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 401.264 ms, result 0 00:25:04.102 [2024-12-06 18:23:14.596017] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:04.102 Some configs were skipped because the RPC state that can call them passed over. 00:25:04.102 18:23:14 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:25:04.361 [2024-12-06 18:23:14.839851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.361 [2024-12-06 18:23:14.840117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:04.361 [2024-12-06 18:23:14.840143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.721 ms 00:25:04.361 [2024-12-06 18:23:14.840158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.361 [2024-12-06 18:23:14.840208] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.082 ms, result 0 00:25:04.361 true 00:25:04.362 18:23:14 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:25:04.621 [2024-12-06 18:23:15.047276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.621 [2024-12-06 18:23:15.047532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:04.621 [2024-12-06 18:23:15.047622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:25:04.621 [2024-12-06 18:23:15.047663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.621 [2024-12-06 18:23:15.047753] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.723 ms, result 0 00:25:04.621 true 00:25:04.621 18:23:15 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78490 00:25:04.621 18:23:15 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78490 ']' 00:25:04.621 18:23:15 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78490 00:25:04.621 18:23:15 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:25:04.621 18:23:15 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.621 18:23:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78490 00:25:04.621 killing process with pid 78490 00:25:04.621 18:23:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:04.621 18:23:15 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:04.621 18:23:15 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78490' 00:25:04.621 18:23:15 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78490 00:25:04.621 18:23:15 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78490 00:25:06.000 [2024-12-06 18:23:16.243779] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.000 [2024-12-06 18:23:16.243847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:06.000 [2024-12-06 18:23:16.243863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:06.000 [2024-12-06 18:23:16.243892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.000 [2024-12-06 18:23:16.243918] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:06.000 [2024-12-06 18:23:16.248127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.000 [2024-12-06 18:23:16.248162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:06.000 [2024-12-06 18:23:16.248181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.195 ms 00:25:06.000 [2024-12-06 18:23:16.248191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.000 [2024-12-06 18:23:16.248474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.000 [2024-12-06 18:23:16.248489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:06.000 [2024-12-06 18:23:16.248502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:25:06.000 [2024-12-06 18:23:16.248512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.000 [2024-12-06 18:23:16.251853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.000 [2024-12-06 18:23:16.251890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:06.000 [2024-12-06 18:23:16.251908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.322 ms 00:25:06.000 [2024-12-06 18:23:16.251918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.000 [2024-12-06 18:23:16.257588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.000 [2024-12-06 18:23:16.257624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:06.000 [2024-12-06 18:23:16.257641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.636 ms 00:25:06.000 [2024-12-06 18:23:16.257651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.000 [2024-12-06 18:23:16.272840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.000 [2024-12-06 18:23:16.272887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:06.000 [2024-12-06 18:23:16.272906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.156 ms 00:25:06.000 [2024-12-06 18:23:16.272916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.000 [2024-12-06 18:23:16.283645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.000 [2024-12-06 18:23:16.283687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:06.000 [2024-12-06 18:23:16.283703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.671 ms 00:25:06.000 [2024-12-06 18:23:16.283713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.000 [2024-12-06 18:23:16.283861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.000 [2024-12-06 18:23:16.283874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:06.000 [2024-12-06 18:23:16.283888] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:25:06.000 [2024-12-06 18:23:16.283897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.000 [2024-12-06 18:23:16.299497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.000 [2024-12-06 18:23:16.299534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:06.000 [2024-12-06 18:23:16.299553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.594 ms 00:25:06.000 [2024-12-06 18:23:16.299563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.000 [2024-12-06 18:23:16.314569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.000 [2024-12-06 18:23:16.314719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:06.000 [2024-12-06 18:23:16.314756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.970 ms 00:25:06.000 [2024-12-06 18:23:16.314767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.000 [2024-12-06 18:23:16.329039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.000 [2024-12-06 18:23:16.329073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:06.000 [2024-12-06 18:23:16.329092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.232 ms 00:25:06.000 [2024-12-06 18:23:16.329118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.000 [2024-12-06 18:23:16.343437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.000 [2024-12-06 18:23:16.343582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:06.000 [2024-12-06 18:23:16.343612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.254 ms 00:25:06.000 [2024-12-06 18:23:16.343622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.000 [2024-12-06 18:23:16.343680] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:06.000 [2024-12-06 18:23:16.343698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 
18:23:16.343840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.343993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.344003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.344016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.344027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.344041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.344052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.344066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.344077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.344090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.344102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.344116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.344127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.344140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:06.000 [2024-12-06 18:23:16.344151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:25:06.000 [2024-12-06 18:23:16.344164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.344998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:06.001 [2024-12-06 18:23:16.345026] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:06.001 [2024-12-06 18:23:16.345045] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0182d045-795a-443b-ad13-478c5d3e8b79 00:25:06.001 [2024-12-06 18:23:16.345059] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:06.001 [2024-12-06 18:23:16.345071] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:06.001 [2024-12-06 18:23:16.345081] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:06.001 [2024-12-06 18:23:16.345094] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:06.001 [2024-12-06 18:23:16.345103] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:06.001 [2024-12-06 18:23:16.345116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:06.001 [2024-12-06 18:23:16.345126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:06.001 [2024-12-06 18:23:16.345137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:06.001 [2024-12-06 18:23:16.345146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:06.001 [2024-12-06 18:23:16.345158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:06.001 [2024-12-06 18:23:16.345168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:06.001 [2024-12-06 18:23:16.345181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.486 ms 00:25:06.001 [2024-12-06 18:23:16.345191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.001 [2024-12-06 18:23:16.365046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.001 [2024-12-06 18:23:16.365082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:06.001 [2024-12-06 18:23:16.365101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.858 ms 00:25:06.001 [2024-12-06 18:23:16.365111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.001 [2024-12-06 18:23:16.365716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.001 [2024-12-06 18:23:16.365740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:06.001 [2024-12-06 18:23:16.365756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:25:06.001 [2024-12-06 18:23:16.365766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.001 [2024-12-06 18:23:16.436039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.001 [2024-12-06 18:23:16.436091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:06.001 [2024-12-06 18:23:16.436108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.001 [2024-12-06 18:23:16.436119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.001 [2024-12-06 18:23:16.436225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.001 [2024-12-06 18:23:16.436237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:06.001 [2024-12-06 18:23:16.436254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.001 [2024-12-06 18:23:16.436289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.001 [2024-12-06 18:23:16.436354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.001 [2024-12-06 18:23:16.436368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:06.001 [2024-12-06 18:23:16.436388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.001 [2024-12-06 18:23:16.436398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.001 [2024-12-06 18:23:16.436424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.001 [2024-12-06 18:23:16.436435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:06.001 [2024-12-06 18:23:16.436449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.001 [2024-12-06 18:23:16.436464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.001 [2024-12-06 18:23:16.561775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.001 [2024-12-06 18:23:16.561845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:06.001 [2024-12-06 18:23:16.561867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.001 [2024-12-06 18:23:16.561878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.260 [2024-12-06 
18:23:16.664931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.260 [2024-12-06 18:23:16.664993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:06.260 [2024-12-06 18:23:16.665011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.260 [2024-12-06 18:23:16.665041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.260 [2024-12-06 18:23:16.665158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.260 [2024-12-06 18:23:16.665171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:06.260 [2024-12-06 18:23:16.665187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.260 [2024-12-06 18:23:16.665198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.260 [2024-12-06 18:23:16.665229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.260 [2024-12-06 18:23:16.665240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:06.260 [2024-12-06 18:23:16.665253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.260 [2024-12-06 18:23:16.665263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.260 [2024-12-06 18:23:16.665409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.260 [2024-12-06 18:23:16.665424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:06.260 [2024-12-06 18:23:16.665437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.260 [2024-12-06 18:23:16.665447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.260 [2024-12-06 18:23:16.665489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.260 [2024-12-06 18:23:16.665501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:06.260 [2024-12-06 18:23:16.665514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.260 [2024-12-06 18:23:16.665524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.260 [2024-12-06 18:23:16.665570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.260 [2024-12-06 18:23:16.665581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:06.260 [2024-12-06 18:23:16.665596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.260 [2024-12-06 18:23:16.665606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.260 [2024-12-06 18:23:16.665650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.260 [2024-12-06 18:23:16.665662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:06.260 [2024-12-06 18:23:16.665675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.260 [2024-12-06 18:23:16.665685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.260 [2024-12-06 18:23:16.665825] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 422.702 ms, result 0 00:25:07.196 18:23:17 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:07.196 18:23:17 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:07.455 [2024-12-06 18:23:17.785665] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:25:07.455 [2024-12-06 18:23:17.785991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78554 ] 00:25:07.455 [2024-12-06 18:23:17.966425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.713 [2024-12-06 18:23:18.077428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.971 [2024-12-06 18:23:18.440438] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:07.971 [2024-12-06 18:23:18.440679] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:08.229 [2024-12-06 18:23:18.602587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.229 [2024-12-06 18:23:18.602640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:08.229 [2024-12-06 18:23:18.602656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:08.229 [2024-12-06 18:23:18.602666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.229 [2024-12-06 18:23:18.605814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.230 [2024-12-06 18:23:18.605852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:08.230 [2024-12-06 18:23:18.605865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.132 ms 00:25:08.230 [2024-12-06 18:23:18.605891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.230 [2024-12-06 18:23:18.605989] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:08.230 [2024-12-06 18:23:18.606940] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:08.230 [2024-12-06 18:23:18.606970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.230 [2024-12-06 18:23:18.606982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:08.230 [2024-12-06 18:23:18.606994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms 00:25:08.230 [2024-12-06 18:23:18.607004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.230 [2024-12-06 18:23:18.608475] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:08.230 [2024-12-06 18:23:18.627434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.230 [2024-12-06 18:23:18.627589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:08.230 [2024-12-06 18:23:18.627627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.991 ms 00:25:08.230 [2024-12-06 18:23:18.627639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.230 [2024-12-06 18:23:18.627737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.230 [2024-12-06 18:23:18.627752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:08.230 [2024-12-06 18:23:18.627763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.025 ms 00:25:08.230 [2024-12-06 18:23:18.627773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.230 [2024-12-06 18:23:18.634398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.230 [2024-12-06 18:23:18.634428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:08.230 [2024-12-06 18:23:18.634440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.594 ms 00:25:08.230 [2024-12-06 18:23:18.634450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.230 [2024-12-06 18:23:18.634553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.230 [2024-12-06 18:23:18.634568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:08.230 [2024-12-06 18:23:18.634579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:08.230 [2024-12-06 18:23:18.634589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.230 [2024-12-06 18:23:18.634619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.230 [2024-12-06 18:23:18.634630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:08.230 [2024-12-06 18:23:18.634641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:08.230 [2024-12-06 18:23:18.634651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.230 [2024-12-06 18:23:18.634673] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:08.230 [2024-12-06 18:23:18.639577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.230 [2024-12-06 18:23:18.639612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:08.230 [2024-12-06 18:23:18.639624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.917 ms 00:25:08.230 [2024-12-06 18:23:18.639635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.230 [2024-12-06 18:23:18.639707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.230 [2024-12-06 18:23:18.639719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:08.230 [2024-12-06 18:23:18.639730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:08.230 [2024-12-06 18:23:18.639740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.230 [2024-12-06 18:23:18.639766] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:08.230 [2024-12-06 18:23:18.639790] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:08.230 [2024-12-06 18:23:18.639824] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:08.230 [2024-12-06 18:23:18.639842] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:08.230 [2024-12-06 18:23:18.639928] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:08.230 [2024-12-06 18:23:18.639941] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:08.230 [2024-12-06 18:23:18.639954] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:08.230 [2024-12-06 18:23:18.639969] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:08.230 [2024-12-06 18:23:18.639981] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:08.230 [2024-12-06 18:23:18.639993] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:08.230 [2024-12-06 18:23:18.640003] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:08.230 [2024-12-06 18:23:18.640013] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:08.230 [2024-12-06 18:23:18.640022] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:08.230 [2024-12-06 18:23:18.640033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.230 [2024-12-06 18:23:18.640043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:08.230 [2024-12-06 18:23:18.640054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:25:08.230 [2024-12-06 18:23:18.640064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.230 [2024-12-06 18:23:18.640140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.230 [2024-12-06 18:23:18.640155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:08.230 [2024-12-06 18:23:18.640165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:08.230 [2024-12-06 18:23:18.640175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.230 [2024-12-06 18:23:18.640286] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:08.230 [2024-12-06 18:23:18.640301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:08.230 [2024-12-06 18:23:18.640312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:08.230 [2024-12-06 18:23:18.640321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.230 [2024-12-06 18:23:18.640332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:08.230 [2024-12-06 18:23:18.640342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:08.230 [2024-12-06 18:23:18.640351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:08.230 [2024-12-06 18:23:18.640361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:08.230 [2024-12-06 18:23:18.640371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:08.230 [2024-12-06 18:23:18.640380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:08.230 [2024-12-06 18:23:18.640390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:08.230 [2024-12-06 18:23:18.640412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:08.230 [2024-12-06 18:23:18.640422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:08.230 [2024-12-06 18:23:18.640431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:08.230 [2024-12-06 18:23:18.640441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:08.230 [2024-12-06 18:23:18.640450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.230 [2024-12-06 18:23:18.640459] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:08.230 [2024-12-06 18:23:18.640469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:08.230 [2024-12-06 18:23:18.640478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.230 [2024-12-06 18:23:18.640487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:08.230 [2024-12-06 18:23:18.640497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:08.230 [2024-12-06 18:23:18.640506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.230 [2024-12-06 18:23:18.640515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:08.230 [2024-12-06 18:23:18.640524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:08.230 [2024-12-06 18:23:18.640533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.230 [2024-12-06 18:23:18.640541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:08.230 [2024-12-06 18:23:18.640551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:08.230 [2024-12-06 18:23:18.640559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.230 [2024-12-06 18:23:18.640568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:08.230 [2024-12-06 18:23:18.640577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:08.230 [2024-12-06 18:23:18.640585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.230 [2024-12-06 18:23:18.640595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:08.230 [2024-12-06 18:23:18.640604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:08.230 [2024-12-06 18:23:18.640612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:08.230 [2024-12-06 18:23:18.640621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:08.230 [2024-12-06 18:23:18.640630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:08.230 [2024-12-06 18:23:18.640639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:08.230 [2024-12-06 18:23:18.640648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:08.230 [2024-12-06 18:23:18.640657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:08.230 [2024-12-06 18:23:18.640666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.230 [2024-12-06 18:23:18.640675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:08.230 [2024-12-06 18:23:18.640684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:08.230 [2024-12-06 18:23:18.640693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.230 [2024-12-06 18:23:18.640703] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:08.230 [2024-12-06 18:23:18.640713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:08.231 [2024-12-06 18:23:18.640726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:08.231 [2024-12-06 18:23:18.640736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.231 [2024-12-06 18:23:18.640746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:08.231 
[2024-12-06 18:23:18.640756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:08.231 [2024-12-06 18:23:18.640765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:08.231 [2024-12-06 18:23:18.640774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:08.231 [2024-12-06 18:23:18.640783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:08.231 [2024-12-06 18:23:18.640792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:08.231 [2024-12-06 18:23:18.640803] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:08.231 [2024-12-06 18:23:18.640815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:08.231 [2024-12-06 18:23:18.640827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:08.231 [2024-12-06 18:23:18.640837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:08.231 [2024-12-06 18:23:18.640847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:08.231 [2024-12-06 18:23:18.640858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:08.231 [2024-12-06 18:23:18.640868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:08.231 [2024-12-06 18:23:18.640879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:08.231 [2024-12-06 18:23:18.640889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:08.231 [2024-12-06 18:23:18.640899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:08.231 [2024-12-06 18:23:18.640909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:08.231 [2024-12-06 18:23:18.640920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:08.231 [2024-12-06 18:23:18.640929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:08.231 [2024-12-06 18:23:18.640940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:08.231 [2024-12-06 18:23:18.640950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:08.231 [2024-12-06 18:23:18.640960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:08.231 [2024-12-06 18:23:18.640971] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:08.231 [2024-12-06 18:23:18.640982] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:08.231 [2024-12-06 18:23:18.640993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:08.231 [2024-12-06 18:23:18.641005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:08.231 [2024-12-06 18:23:18.641015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:08.231 [2024-12-06 18:23:18.641025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:08.231 [2024-12-06 18:23:18.641036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.231 [2024-12-06 18:23:18.641050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:08.231 [2024-12-06 18:23:18.641060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:25:08.231 [2024-12-06 18:23:18.641070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.231 [2024-12-06 18:23:18.680076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.231 [2024-12-06 18:23:18.680230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:08.231 [2024-12-06 18:23:18.680366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.010 ms 00:25:08.231 [2024-12-06 18:23:18.680408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.231 [2024-12-06 18:23:18.680553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.231 [2024-12-06 18:23:18.680628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:08.231 [2024-12-06 18:23:18.680722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:08.231 [2024-12-06 18:23:18.680752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.231 [2024-12-06 18:23:18.736859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.231 [2024-12-06 18:23:18.737145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:08.231 [2024-12-06 18:23:18.737252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.150 ms 00:25:08.231 [2024-12-06 18:23:18.737311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.231 [2024-12-06 18:23:18.737477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.231 [2024-12-06 18:23:18.737630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:08.231 [2024-12-06 18:23:18.737712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:08.231 [2024-12-06 18:23:18.737742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.231 [2024-12-06 18:23:18.738203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.231 [2024-12-06 18:23:18.738253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:08.231 [2024-12-06 18:23:18.738495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:25:08.231 [2024-12-06 18:23:18.738534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.231 [2024-12-06 
18:23:18.738692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.231 [2024-12-06 18:23:18.738734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:08.231 [2024-12-06 18:23:18.738827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:25:08.231 [2024-12-06 18:23:18.738864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.231 [2024-12-06 18:23:18.757982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.231 [2024-12-06 18:23:18.758136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:08.231 [2024-12-06 18:23:18.758274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.098 ms 00:25:08.231 [2024-12-06 18:23:18.758314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.231 [2024-12-06 18:23:18.777671] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:08.231 [2024-12-06 18:23:18.777844] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:08.231 [2024-12-06 18:23:18.777944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.231 [2024-12-06 18:23:18.777977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:08.231 [2024-12-06 18:23:18.778009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.508 ms 00:25:08.231 [2024-12-06 18:23:18.778039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.489 [2024-12-06 18:23:18.807572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.489 [2024-12-06 18:23:18.807741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:08.489 [2024-12-06 18:23:18.807871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.482 ms 00:25:08.489 [2024-12-06 18:23:18.807911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.489 [2024-12-06 18:23:18.826504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.489 [2024-12-06 18:23:18.826643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:08.489 [2024-12-06 18:23:18.826721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.521 ms 00:25:08.489 [2024-12-06 18:23:18.826756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.489 [2024-12-06 18:23:18.844817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.489 [2024-12-06 18:23:18.844966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:08.489 [2024-12-06 18:23:18.845074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.991 ms 00:25:08.489 [2024-12-06 18:23:18.845110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.489 [2024-12-06 18:23:18.845924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.489 [2024-12-06 18:23:18.845950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:08.489 [2024-12-06 18:23:18.845963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:25:08.489 [2024-12-06 18:23:18.845973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.489 [2024-12-06 18:23:18.931836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:08.489 [2024-12-06 18:23:18.931908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:08.489 [2024-12-06 18:23:18.931926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.970 ms 00:25:08.489 [2024-12-06 18:23:18.931936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.489 [2024-12-06 18:23:18.943281] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:08.489 [2024-12-06 18:23:18.959689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.489 [2024-12-06 18:23:18.959735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:08.489 [2024-12-06 18:23:18.959750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.643 ms 00:25:08.489 [2024-12-06 18:23:18.959781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.489 [2024-12-06 18:23:18.959912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.489 [2024-12-06 18:23:18.959926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:08.489 [2024-12-06 18:23:18.959937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:08.489 [2024-12-06 18:23:18.959948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.489 [2024-12-06 18:23:18.960004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.489 [2024-12-06 18:23:18.960016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:08.489 [2024-12-06 18:23:18.960027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:08.489 [2024-12-06 18:23:18.960041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.489 [2024-12-06 18:23:18.960075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.489 [2024-12-06 18:23:18.960088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:08.489 [2024-12-06 18:23:18.960099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:08.489 [2024-12-06 18:23:18.960109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.489 [2024-12-06 18:23:18.960146] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:08.489 [2024-12-06 18:23:18.960158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.489 [2024-12-06 18:23:18.960169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:08.489 [2024-12-06 18:23:18.960179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:08.489 [2024-12-06 18:23:18.960189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.489 [2024-12-06 18:23:18.996673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.489 [2024-12-06 18:23:18.996712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:08.489 [2024-12-06 18:23:18.996726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.521 ms 00:25:08.489 [2024-12-06 18:23:18.996736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.489 [2024-12-06 18:23:18.996846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.490 [2024-12-06 18:23:18.996860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:25:08.490 [2024-12-06 18:23:18.996871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:08.490 [2024-12-06 18:23:18.996880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.490 [2024-12-06 18:23:18.997827] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:08.490 [2024-12-06 18:23:19.002079] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 395.611 ms, result 0 00:25:08.490 [2024-12-06 18:23:19.003031] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:08.490 [2024-12-06 18:23:19.021374] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:09.861  [2024-12-06T18:23:21.374Z] Copying: 28/256 [MB] (28 MBps) [2024-12-06T18:23:22.310Z] Copying: 52/256 [MB] (24 MBps) [2024-12-06T18:23:23.246Z] Copying: 77/256 [MB] (24 MBps) [2024-12-06T18:23:24.255Z] Copying: 102/256 [MB] (24 MBps) [2024-12-06T18:23:25.194Z] Copying: 124/256 [MB] (22 MBps) [2024-12-06T18:23:26.133Z] Copying: 148/256 [MB] (23 MBps) [2024-12-06T18:23:27.068Z] Copying: 171/256 [MB] (23 MBps) [2024-12-06T18:23:28.445Z] Copying: 196/256 [MB] (24 MBps) [2024-12-06T18:23:29.013Z] Copying: 220/256 [MB] (24 MBps) [2024-12-06T18:23:29.581Z] Copying: 243/256 [MB] (23 MBps) [2024-12-06T18:23:29.581Z] Copying: 256/256 [MB] (average 24 MBps)[2024-12-06 18:23:29.565742] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:19.264 [2024-12-06 18:23:29.580362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.264 [2024-12-06 18:23:29.580405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:19.264 [2024-12-06 18:23:29.580431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:19.264 [2024-12-06 18:23:29.580441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.264 [2024-12-06 18:23:29.580464] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:19.264 [2024-12-06 18:23:29.584600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.264 [2024-12-06 18:23:29.584630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:19.264 [2024-12-06 18:23:29.584642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.128 ms 00:25:19.264 [2024-12-06 18:23:29.584652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.264 [2024-12-06 18:23:29.584891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.264 [2024-12-06 18:23:29.584904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:19.264 [2024-12-06 18:23:29.584915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:25:19.264 [2024-12-06 18:23:29.584925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.264 [2024-12-06 18:23:29.587826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.264 [2024-12-06 18:23:29.587981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:19.264 [2024-12-06 18:23:29.588001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.882 ms 00:25:19.264 [2024-12-06 18:23:29.588011] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.264 [2024-12-06 18:23:29.593554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.264 [2024-12-06 18:23:29.593586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:19.264 [2024-12-06 18:23:29.593597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.526 ms 00:25:19.264 [2024-12-06 18:23:29.593607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.264 [2024-12-06 18:23:29.629033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.264 [2024-12-06 18:23:29.629070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:19.264 [2024-12-06 18:23:29.629082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.414 ms 00:25:19.264 [2024-12-06 18:23:29.629108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.265 [2024-12-06 18:23:29.650047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.265 [2024-12-06 18:23:29.650084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:19.265 [2024-12-06 18:23:29.650109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.918 ms 00:25:19.265 [2024-12-06 18:23:29.650119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.265 [2024-12-06 18:23:29.650283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.265 [2024-12-06 18:23:29.650297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:19.265 [2024-12-06 18:23:29.650338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:25:19.265 [2024-12-06 18:23:29.650348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.265 [2024-12-06 18:23:29.686358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.265 [2024-12-06 18:23:29.686403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:19.265 [2024-12-06 18:23:29.686416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.049 ms 00:25:19.265 [2024-12-06 18:23:29.686426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.265 [2024-12-06 18:23:29.723064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.265 [2024-12-06 18:23:29.723105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:19.265 [2024-12-06 18:23:29.723119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.641 ms 00:25:19.265 [2024-12-06 18:23:29.723129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.265 [2024-12-06 18:23:29.758450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.265 [2024-12-06 18:23:29.758486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:19.265 [2024-12-06 18:23:29.758500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.317 ms 00:25:19.265 [2024-12-06 18:23:29.758525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.265 [2024-12-06 18:23:29.794469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.265 [2024-12-06 18:23:29.794509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:19.265 [2024-12-06 18:23:29.794522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 35.889 ms 00:25:19.265 [2024-12-06 18:23:29.794532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.265 [2024-12-06 18:23:29.794588] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:19.265 [2024-12-06 18:23:29.794605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 
[2024-12-06 18:23:29.794847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.794997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:25:19.265 [2024-12-06 18:23:29.795111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:19.265 [2024-12-06 18:23:29.795322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:19.266 [2024-12-06 18:23:29.795700] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:19.266 [2024-12-06 18:23:29.795710] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0182d045-795a-443b-ad13-478c5d3e8b79 00:25:19.266 [2024-12-06 18:23:29.795721] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:19.266 [2024-12-06 18:23:29.795731] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:19.266 [2024-12-06 18:23:29.795741] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:19.266 [2024-12-06 18:23:29.795751] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:19.266 [2024-12-06 18:23:29.795761] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:19.266 [2024-12-06 18:23:29.795771] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:19.266 [2024-12-06 18:23:29.795787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:19.266 [2024-12-06 18:23:29.795796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:19.266 [2024-12-06 18:23:29.795805] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:19.266 [2024-12-06 18:23:29.795815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.266 [2024-12-06 18:23:29.795825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:19.266 [2024-12-06 18:23:29.795835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.231 ms 00:25:19.266 [2024-12-06 18:23:29.795845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.266 [2024-12-06 18:23:29.815500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.266 [2024-12-06 18:23:29.815534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:19.266 [2024-12-06 18:23:29.815547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.666 ms 00:25:19.266 [2024-12-06 18:23:29.815574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.266 [2024-12-06 18:23:29.816170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.266 [2024-12-06 18:23:29.816185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:19.266 [2024-12-06 18:23:29.816196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:25:19.266 [2024-12-06 18:23:29.816206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.526 [2024-12-06 18:23:29.873006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.526 [2024-12-06 18:23:29.873056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:19.526 [2024-12-06 18:23:29.873070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.526 [2024-12-06 18:23:29.873084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.526 [2024-12-06 18:23:29.873208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.526 [2024-12-06 
18:23:29.873220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:19.526 [2024-12-06 18:23:29.873231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.526 [2024-12-06 18:23:29.873241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.526 [2024-12-06 18:23:29.873308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.526 [2024-12-06 18:23:29.873322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:19.526 [2024-12-06 18:23:29.873333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.526 [2024-12-06 18:23:29.873343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.526 [2024-12-06 18:23:29.873368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.526 [2024-12-06 18:23:29.873379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:19.526 [2024-12-06 18:23:29.873389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.526 [2024-12-06 18:23:29.873399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.526 [2024-12-06 18:23:29.998170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.526 [2024-12-06 18:23:29.998223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:19.526 [2024-12-06 18:23:29.998254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.526 [2024-12-06 18:23:29.998264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.526 [2024-12-06 18:23:30.100102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.526 [2024-12-06 18:23:30.100185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:19.526 [2024-12-06 18:23:30.100200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.526 [2024-12-06 18:23:30.100211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.526 [2024-12-06 18:23:30.100325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.526 [2024-12-06 18:23:30.100338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:19.526 [2024-12-06 18:23:30.100349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.526 [2024-12-06 18:23:30.100359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.787 [2024-12-06 18:23:30.100388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.787 [2024-12-06 18:23:30.100406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:19.787 [2024-12-06 18:23:30.100416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.787 [2024-12-06 18:23:30.100426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.787 [2024-12-06 18:23:30.100528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.787 [2024-12-06 18:23:30.100541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:19.787 [2024-12-06 18:23:30.100551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.787 [2024-12-06 18:23:30.100561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.787 [2024-12-06 18:23:30.100598] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:19.787 [2024-12-06 18:23:30.100611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:25:19.787 [2024-12-06 18:23:30.100627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:19.787 [2024-12-06 18:23:30.100637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:19.787 [2024-12-06 18:23:30.100675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:19.787 [2024-12-06 18:23:30.100685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:19.787 [2024-12-06 18:23:30.100696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:19.787 [2024-12-06 18:23:30.100706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:19.787 [2024-12-06 18:23:30.100749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:19.787 [2024-12-06 18:23:30.100765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:19.787 [2024-12-06 18:23:30.100775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:19.787 [2024-12-06 18:23:30.100785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:19.787 [2024-12-06 18:23:30.100923] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 521.395 ms, result 0
00:25:20.760
00:25:20.760
00:25:20.760 18:23:31 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:25:20.760 18:23:31 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:25:21.328 18:23:31 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-12-06 18:23:31.719553] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization...
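The two trim.sh checks above verify the trim result before the next write phase: cmp compares the first 4 MiB of the dumped FTL data against /dev/zero (the trimmed range must read back as zeroes) and md5sum fingerprints the file, after which spdk_dd rewrites a random pattern through ftl0. A rough, self-contained Python equivalent of that check (illustrative only, not the SPDK test itself; the path and byte count are taken verbatim from the logged commands):

```python
# Rough Python equivalent of the trim.sh@86/@87 checks above (illustrative
# only, not the SPDK test itself). Path and byte count come from the log.
import hashlib

DATA = "/home/vagrant/spdk_repo/spdk/test/ftl/data"
CHECK_BYTES = 4194304  # cmp --bytes=4194304 ... /dev/zero

md5 = hashlib.md5()
with open(DATA, "rb") as f:
    head = f.read(CHECK_BYTES)
    # cmp against /dev/zero: every byte of the trimmed range must be 0x00
    if head != bytes(len(head)):
        raise SystemExit("trimmed range does not read back as zeroes")
    md5.update(head)
    for chunk in iter(lambda: f.read(1 << 20), b""):
        md5.update(chunk)
print(f"{md5.hexdigest()}  {DATA}")  # same shape as md5sum output
```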
00:25:21.328 [2024-12-06 18:23:31.719693] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78697 ] 00:25:21.328 [2024-12-06 18:23:31.900912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.587 [2024-12-06 18:23:32.007423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.846 [2024-12-06 18:23:32.375443] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:21.846 [2024-12-06 18:23:32.375524] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:22.106 [2024-12-06 18:23:32.537329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.106 [2024-12-06 18:23:32.537385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:22.106 [2024-12-06 18:23:32.537400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:22.106 [2024-12-06 18:23:32.537412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.106 [2024-12-06 18:23:32.540538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.106 [2024-12-06 18:23:32.540584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:22.106 [2024-12-06 18:23:32.540597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.110 ms 00:25:22.106 [2024-12-06 18:23:32.540608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.106 [2024-12-06 18:23:32.540707] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:22.106 [2024-12-06 18:23:32.541661] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:22.106 [2024-12-06 18:23:32.541691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.106 [2024-12-06 18:23:32.541703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:22.106 [2024-12-06 18:23:32.541714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:25:22.106 [2024-12-06 18:23:32.541724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.106 [2024-12-06 18:23:32.543208] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:22.106 [2024-12-06 18:23:32.562819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.106 [2024-12-06 18:23:32.562863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:22.106 [2024-12-06 18:23:32.562878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.643 ms 00:25:22.106 [2024-12-06 18:23:32.562889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.106 [2024-12-06 18:23:32.562996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.106 [2024-12-06 18:23:32.563011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:22.106 [2024-12-06 18:23:32.563023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:22.106 [2024-12-06 18:23:32.563032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.106 [2024-12-06 18:23:32.569615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:22.106 [2024-12-06 18:23:32.569645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:22.106 [2024-12-06 18:23:32.569674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.547 ms 00:25:22.106 [2024-12-06 18:23:32.569684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.106 [2024-12-06 18:23:32.569783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.106 [2024-12-06 18:23:32.569798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:22.106 [2024-12-06 18:23:32.569809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:22.106 [2024-12-06 18:23:32.569819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.106 [2024-12-06 18:23:32.569852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.106 [2024-12-06 18:23:32.569863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:22.106 [2024-12-06 18:23:32.569874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:22.106 [2024-12-06 18:23:32.569883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.106 [2024-12-06 18:23:32.569908] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:22.106 [2024-12-06 18:23:32.574919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.106 [2024-12-06 18:23:32.574956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:22.106 [2024-12-06 18:23:32.574968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.026 ms 00:25:22.106 [2024-12-06 18:23:32.574978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.106 [2024-12-06 18:23:32.575048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.106 [2024-12-06 18:23:32.575062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:22.106 [2024-12-06 18:23:32.575073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:22.106 [2024-12-06 18:23:32.575082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.106 [2024-12-06 18:23:32.575110] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:22.106 [2024-12-06 18:23:32.575133] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:22.106 [2024-12-06 18:23:32.575168] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:22.106 [2024-12-06 18:23:32.575186] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:22.106 [2024-12-06 18:23:32.575284] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:22.106 [2024-12-06 18:23:32.575298] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:22.106 [2024-12-06 18:23:32.575311] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:22.106 [2024-12-06 18:23:32.575327] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:22.106 [2024-12-06 18:23:32.575339] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:22.106 [2024-12-06 18:23:32.575350] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:22.106 [2024-12-06 18:23:32.575360] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:22.106 [2024-12-06 18:23:32.575370] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:22.106 [2024-12-06 18:23:32.575380] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:22.106 [2024-12-06 18:23:32.575391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.106 [2024-12-06 18:23:32.575402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:22.106 [2024-12-06 18:23:32.575412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:25:22.106 [2024-12-06 18:23:32.575421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.106 [2024-12-06 18:23:32.575497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.106 [2024-12-06 18:23:32.575528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:22.106 [2024-12-06 18:23:32.575539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:22.106 [2024-12-06 18:23:32.575549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.106 [2024-12-06 18:23:32.575641] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:22.106 [2024-12-06 18:23:32.575655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:22.106 [2024-12-06 18:23:32.575666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:22.106 [2024-12-06 18:23:32.575676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:22.106 [2024-12-06 18:23:32.575687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:22.106 [2024-12-06 18:23:32.575696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:22.106 [2024-12-06 18:23:32.575705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:22.106 [2024-12-06 18:23:32.575715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:22.106 [2024-12-06 18:23:32.575724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:22.106 [2024-12-06 18:23:32.575733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:22.106 [2024-12-06 18:23:32.575743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:22.106 [2024-12-06 18:23:32.575763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:22.106 [2024-12-06 18:23:32.575772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:22.106 [2024-12-06 18:23:32.575781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:22.106 [2024-12-06 18:23:32.575791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:22.106 [2024-12-06 18:23:32.575800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:22.106 [2024-12-06 18:23:32.575809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:22.106 [2024-12-06 18:23:32.575818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:22.107 [2024-12-06 18:23:32.575828] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:22.107 [2024-12-06 18:23:32.575837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:22.107 [2024-12-06 18:23:32.575847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:22.107 [2024-12-06 18:23:32.575856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:22.107 [2024-12-06 18:23:32.575865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:22.107 [2024-12-06 18:23:32.575874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:22.107 [2024-12-06 18:23:32.575883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:22.107 [2024-12-06 18:23:32.575892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:22.107 [2024-12-06 18:23:32.575901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:22.107 [2024-12-06 18:23:32.575910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:22.107 [2024-12-06 18:23:32.575919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:22.107 [2024-12-06 18:23:32.575928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:22.107 [2024-12-06 18:23:32.575937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:22.107 [2024-12-06 18:23:32.575946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:22.107 [2024-12-06 18:23:32.575955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:22.107 [2024-12-06 18:23:32.575963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:22.107 [2024-12-06 18:23:32.575972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:22.107 [2024-12-06 18:23:32.575981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:22.107 [2024-12-06 18:23:32.575990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:22.107 [2024-12-06 18:23:32.576000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:22.107 [2024-12-06 18:23:32.576009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:22.107 [2024-12-06 18:23:32.576018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:22.107 [2024-12-06 18:23:32.576027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:22.107 [2024-12-06 18:23:32.576038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:22.107 [2024-12-06 18:23:32.576047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:22.107 [2024-12-06 18:23:32.576056] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:22.107 [2024-12-06 18:23:32.576067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:22.107 [2024-12-06 18:23:32.576080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:22.107 [2024-12-06 18:23:32.576090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:22.107 [2024-12-06 18:23:32.576099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:22.107 [2024-12-06 18:23:32.576109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:22.107 [2024-12-06 18:23:32.576118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:22.107 
[2024-12-06 18:23:32.576127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:22.107 [2024-12-06 18:23:32.576136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:22.107 [2024-12-06 18:23:32.576146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:22.107 [2024-12-06 18:23:32.576156] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:22.107 [2024-12-06 18:23:32.576168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:22.107 [2024-12-06 18:23:32.576179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:22.107 [2024-12-06 18:23:32.576189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:22.107 [2024-12-06 18:23:32.576199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:22.107 [2024-12-06 18:23:32.576210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:22.107 [2024-12-06 18:23:32.576220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:22.107 [2024-12-06 18:23:32.576230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:22.107 [2024-12-06 18:23:32.576239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:22.107 [2024-12-06 18:23:32.576249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:22.107 [2024-12-06 18:23:32.576259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:22.107 [2024-12-06 18:23:32.576280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:22.107 [2024-12-06 18:23:32.576291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:22.107 [2024-12-06 18:23:32.576301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:22.107 [2024-12-06 18:23:32.576311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:22.107 [2024-12-06 18:23:32.576322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:22.107 [2024-12-06 18:23:32.576332] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:22.107 [2024-12-06 18:23:32.576344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:22.107 [2024-12-06 18:23:32.576355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:22.107 [2024-12-06 18:23:32.576366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:22.107 [2024-12-06 18:23:32.576376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:22.107 [2024-12-06 18:23:32.576388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:22.107 [2024-12-06 18:23:32.576399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.107 [2024-12-06 18:23:32.576414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:22.107 [2024-12-06 18:23:32.576424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:25:22.107 [2024-12-06 18:23:32.576433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.107 [2024-12-06 18:23:32.615253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.107 [2024-12-06 18:23:32.615315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:22.107 [2024-12-06 18:23:32.615331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.819 ms 00:25:22.107 [2024-12-06 18:23:32.615341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.107 [2024-12-06 18:23:32.615491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.107 [2024-12-06 18:23:32.615504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:22.107 [2024-12-06 18:23:32.615515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:22.107 [2024-12-06 18:23:32.615525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.107 [2024-12-06 18:23:32.674175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.107 [2024-12-06 18:23:32.674230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:22.107 [2024-12-06 18:23:32.674249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.719 ms 00:25:22.107 [2024-12-06 18:23:32.674260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.107 [2024-12-06 18:23:32.674403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.107 [2024-12-06 18:23:32.674417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:22.107 [2024-12-06 18:23:32.674429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:22.107 [2024-12-06 18:23:32.674439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.107 [2024-12-06 18:23:32.674875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.107 [2024-12-06 18:23:32.674888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:22.107 [2024-12-06 18:23:32.674905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:25:22.107 [2024-12-06 18:23:32.674915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.107 [2024-12-06 18:23:32.675032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.107 [2024-12-06 18:23:32.675046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:22.107 [2024-12-06 18:23:32.675056] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:25:22.107 [2024-12-06 18:23:32.675066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.367 [2024-12-06 18:23:32.694714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.367 [2024-12-06 18:23:32.694759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:22.367 [2024-12-06 18:23:32.694773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.656 ms 00:25:22.367 [2024-12-06 18:23:32.694785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.367 [2024-12-06 18:23:32.715134] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:22.367 [2024-12-06 18:23:32.715177] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:22.367 [2024-12-06 18:23:32.715192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.367 [2024-12-06 18:23:32.715203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:22.367 [2024-12-06 18:23:32.715215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.319 ms 00:25:22.367 [2024-12-06 18:23:32.715225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.367 [2024-12-06 18:23:32.745602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.367 [2024-12-06 18:23:32.745651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:22.367 [2024-12-06 18:23:32.745665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.331 ms 00:25:22.367 [2024-12-06 18:23:32.745676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.367 [2024-12-06 18:23:32.765397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.367 [2024-12-06 18:23:32.765446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:22.367 [2024-12-06 18:23:32.765461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.662 ms 00:25:22.367 [2024-12-06 18:23:32.765471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.367 [2024-12-06 18:23:32.783799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.367 [2024-12-06 18:23:32.783840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:22.367 [2024-12-06 18:23:32.783853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.271 ms 00:25:22.367 [2024-12-06 18:23:32.783863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.367 [2024-12-06 18:23:32.784676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.367 [2024-12-06 18:23:32.784707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:22.367 [2024-12-06 18:23:32.784719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:25:22.367 [2024-12-06 18:23:32.784730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.367 [2024-12-06 18:23:32.870721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.367 [2024-12-06 18:23:32.870787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:22.367 [2024-12-06 18:23:32.870804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.098 ms 00:25:22.367 [2024-12-06 18:23:32.870815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.367 [2024-12-06 18:23:32.882147] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:22.367 [2024-12-06 18:23:32.898539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.367 [2024-12-06 18:23:32.898590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:22.367 [2024-12-06 18:23:32.898606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.626 ms 00:25:22.367 [2024-12-06 18:23:32.898623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.367 [2024-12-06 18:23:32.898759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.367 [2024-12-06 18:23:32.898773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:22.367 [2024-12-06 18:23:32.898784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:22.367 [2024-12-06 18:23:32.898795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.367 [2024-12-06 18:23:32.898848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.367 [2024-12-06 18:23:32.898860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:22.367 [2024-12-06 18:23:32.898871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:22.367 [2024-12-06 18:23:32.898885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.367 [2024-12-06 18:23:32.898918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.367 [2024-12-06 18:23:32.898932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:22.367 [2024-12-06 18:23:32.898942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:22.367 [2024-12-06 18:23:32.898952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.367 [2024-12-06 18:23:32.898989] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:22.367 [2024-12-06 18:23:32.899001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.367 [2024-12-06 18:23:32.899011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:22.367 [2024-12-06 18:23:32.899021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:22.367 [2024-12-06 18:23:32.899031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.367 [2024-12-06 18:23:32.936794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.367 [2024-12-06 18:23:32.936855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:22.367 [2024-12-06 18:23:32.936872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.800 ms 00:25:22.367 [2024-12-06 18:23:32.936884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.367 [2024-12-06 18:23:32.937000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.367 [2024-12-06 18:23:32.937014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:22.367 [2024-12-06 18:23:32.937025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:22.367 [2024-12-06 18:23:32.937036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
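Every management step in the startup and shutdown sequences above is logged by mngt/ftl_mngt.c:trace_step as an Action/Rollback header followed by name, duration, and status entries. A small illustrative helper (not SPDK code; it assumes the raw console log with one entry per line, unlike the wrapped rendering here) that folds those quadruplets into a per-step timing summary:

```python
# Illustrative log-reading helper: fold the repeated
# Action/Rollback -> name -> duration -> status quadruplets emitted by
# mngt/ftl_mngt.c:trace_step into one "step: duration" row each.
# Assumes one log entry per line, e.g.: python3 summarize_steps.py < console.log
# (summarize_steps.py / console.log are hypothetical names)
import re
import sys

name_re = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.*)")
dur_re = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

step = None
for line in sys.stdin:
    m = name_re.search(line)
    if m:
        step = m.group(1).strip()
        continue
    m = dur_re.search(line)
    if m and step is not None:
        print(f"{step}: {m.group(1)} ms")
        step = None
```

Against this run it would print rows such as "Restore P2L checkpoints: 86.098 ms" and "Set FTL dirty state: 37.800 ms", which makes the dominant startup costs easy to spot.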
00:25:22.367 [2024-12-06 18:23:32.937972] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:22.627 [2024-12-06 18:23:32.942493] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 401.012 ms, result 0 00:25:22.627 [2024-12-06 18:23:32.943290] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:22.627 [2024-12-06 18:23:32.962045] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:22.627 [2024-12-06T18:23:33.203Z] Copying: 4096/4096 [kB] (average 25 MBps) [2024-12-06 18:23:33.122012] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:22.627 [2024-12-06 18:23:33.136392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.627 [2024-12-06 18:23:33.136447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:22.627 [2024-12-06 18:23:33.136468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:22.627 [2024-12-06 18:23:33.136479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.627 [2024-12-06 18:23:33.136505] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:22.627 [2024-12-06 18:23:33.140486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.627 [2024-12-06 18:23:33.140520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:22.627 [2024-12-06 18:23:33.140532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.970 ms 00:25:22.627 [2024-12-06 18:23:33.140543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.627 [2024-12-06 18:23:33.142584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.627 [2024-12-06 18:23:33.142638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:22.627 [2024-12-06 18:23:33.142651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.019 ms 00:25:22.627 [2024-12-06 18:23:33.142662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.627 [2024-12-06 18:23:33.145963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.627 [2024-12-06 18:23:33.145993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:22.627 [2024-12-06 18:23:33.146005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.282 ms 00:25:22.627 [2024-12-06 18:23:33.146015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.627 [2024-12-06 18:23:33.151705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.627 [2024-12-06 18:23:33.151742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:22.627 [2024-12-06 18:23:33.151755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.668 ms 00:25:22.627 [2024-12-06 18:23:33.151764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.627 [2024-12-06 18:23:33.188241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.627 [2024-12-06 18:23:33.188301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:22.627 [2024-12-06 18:23:33.188317] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 36.464 ms 00:25:22.627 [2024-12-06 18:23:33.188328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.887 [2024-12-06 18:23:33.209276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.887 [2024-12-06 18:23:33.209339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:22.887 [2024-12-06 18:23:33.209355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.915 ms 00:25:22.887 [2024-12-06 18:23:33.209366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.887 [2024-12-06 18:23:33.209544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.887 [2024-12-06 18:23:33.209558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:22.887 [2024-12-06 18:23:33.209580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:25:22.888 [2024-12-06 18:23:33.209590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.888 [2024-12-06 18:23:33.246825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.888 [2024-12-06 18:23:33.246889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:22.888 [2024-12-06 18:23:33.246906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.260 ms 00:25:22.888 [2024-12-06 18:23:33.246917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.888 [2024-12-06 18:23:33.283581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.888 [2024-12-06 18:23:33.283628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:22.888 [2024-12-06 18:23:33.283642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.663 ms 00:25:22.888 [2024-12-06 18:23:33.283653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.888 [2024-12-06 18:23:33.319976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.888 [2024-12-06 18:23:33.320019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:22.888 [2024-12-06 18:23:33.320033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.323 ms 00:25:22.888 [2024-12-06 18:23:33.320043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.888 [2024-12-06 18:23:33.355683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.888 [2024-12-06 18:23:33.355726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:22.888 [2024-12-06 18:23:33.355740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.585 ms 00:25:22.888 [2024-12-06 18:23:33.355750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.888 [2024-12-06 18:23:33.355808] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:22.888 [2024-12-06 18:23:33.355826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.355839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.355851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.355861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:25:22.888 [2024-12-06 18:23:33.355872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.355882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.355893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.355904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.355914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.355925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.355936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.355946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.355956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.355966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.355976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.355986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.355996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:22.888 [2024-12-06 18:23:33.356602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356663] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:22.889 [2024-12-06 18:23:33.356916] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:22.889 [2024-12-06 18:23:33.356926] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0182d045-795a-443b-ad13-478c5d3e8b79 00:25:22.889 [2024-12-06 18:23:33.356937] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:22.889 [2024-12-06 18:23:33.356947] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:25:22.889 [2024-12-06 18:23:33.356957] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:22.889 [2024-12-06 18:23:33.356967] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:22.889 [2024-12-06 18:23:33.356976] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:22.889 [2024-12-06 18:23:33.356987] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:22.889 [2024-12-06 18:23:33.357001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:22.889 [2024-12-06 18:23:33.357010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:22.889 [2024-12-06 18:23:33.357019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:22.889 [2024-12-06 18:23:33.357028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.889 [2024-12-06 18:23:33.357038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:22.889 [2024-12-06 18:23:33.357049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.224 ms 00:25:22.889 [2024-12-06 18:23:33.357060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.889 [2024-12-06 18:23:33.377035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.889 [2024-12-06 18:23:33.377091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:22.889 [2024-12-06 18:23:33.377105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.984 ms 00:25:22.889 [2024-12-06 18:23:33.377117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.889 [2024-12-06 18:23:33.377710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.889 [2024-12-06 18:23:33.377722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:22.889 [2024-12-06 18:23:33.377733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:25:22.889 [2024-12-06 18:23:33.377743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.889 [2024-12-06 18:23:33.433349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.889 [2024-12-06 18:23:33.433401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:22.889 [2024-12-06 18:23:33.433415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.889 [2024-12-06 18:23:33.433430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.889 [2024-12-06 18:23:33.433548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.889 [2024-12-06 18:23:33.433560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:22.889 [2024-12-06 18:23:33.433570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.889 [2024-12-06 18:23:33.433580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.889 [2024-12-06 18:23:33.433634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.889 [2024-12-06 18:23:33.433647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:22.889 [2024-12-06 18:23:33.433658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.889 [2024-12-06 18:23:33.433667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.889 [2024-12-06 18:23:33.433690] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.889 [2024-12-06 18:23:33.433701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:22.889 [2024-12-06 18:23:33.433711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.889 [2024-12-06 18:23:33.433720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.149 [2024-12-06 18:23:33.558376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.149 [2024-12-06 18:23:33.558462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:23.149 [2024-12-06 18:23:33.558478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.149 [2024-12-06 18:23:33.558495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.149 [2024-12-06 18:23:33.661041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.149 [2024-12-06 18:23:33.661106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:23.149 [2024-12-06 18:23:33.661121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.149 [2024-12-06 18:23:33.661132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.149 [2024-12-06 18:23:33.661225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.149 [2024-12-06 18:23:33.661237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:23.149 [2024-12-06 18:23:33.661248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.149 [2024-12-06 18:23:33.661259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.149 [2024-12-06 18:23:33.661302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.149 [2024-12-06 18:23:33.661319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:23.149 [2024-12-06 18:23:33.661329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.149 [2024-12-06 18:23:33.661339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.149 [2024-12-06 18:23:33.661455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.149 [2024-12-06 18:23:33.661482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:23.149 [2024-12-06 18:23:33.661493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.149 [2024-12-06 18:23:33.661502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.149 [2024-12-06 18:23:33.661540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.149 [2024-12-06 18:23:33.661552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:23.149 [2024-12-06 18:23:33.661566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.149 [2024-12-06 18:23:33.661576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.149 [2024-12-06 18:23:33.661613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.149 [2024-12-06 18:23:33.661624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:23.149 [2024-12-06 18:23:33.661634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.149 [2024-12-06 18:23:33.661644] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:23.149 [2024-12-06 18:23:33.661686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.149 [2024-12-06 18:23:33.661701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:23.149 [2024-12-06 18:23:33.661711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.149 [2024-12-06 18:23:33.661722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.149 [2024-12-06 18:23:33.661857] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 526.313 ms, result 0 00:25:24.527 00:25:24.527 00:25:24.527 18:23:34 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78733 00:25:24.527 18:23:34 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78733 00:25:24.527 18:23:34 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78733 ']' 00:25:24.527 18:23:34 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.527 18:23:34 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:25:24.527 18:23:34 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:24.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.527 18:23:34 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.527 18:23:34 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:24.527 18:23:34 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:24.527 [2024-12-06 18:23:34.848956] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:25:24.527 [2024-12-06 18:23:34.849309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78733 ] 00:25:24.527 [2024-12-06 18:23:35.030519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.785 [2024-12-06 18:23:35.146808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.776 18:23:36 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:25.776 18:23:36 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:25.776 18:23:36 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:25:25.776 [2024-12-06 18:23:36.214124] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:25.776 [2024-12-06 18:23:36.214197] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:26.035 [2024-12-06 18:23:36.402419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.035 [2024-12-06 18:23:36.402481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:26.035 [2024-12-06 18:23:36.402502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:26.035 [2024-12-06 18:23:36.402513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.035 [2024-12-06 18:23:36.406512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.035 [2024-12-06 18:23:36.406553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:26.035 [2024-12-06 18:23:36.406569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.982 ms 00:25:26.035 [2024-12-06 18:23:36.406579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.035 [2024-12-06 18:23:36.406684] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:26.035 [2024-12-06 18:23:36.407715] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:26.035 [2024-12-06 18:23:36.407746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.035 [2024-12-06 18:23:36.407757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:26.035 [2024-12-06 18:23:36.407770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:25:26.035 [2024-12-06 18:23:36.407781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.035 [2024-12-06 18:23:36.409210] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:26.035 [2024-12-06 18:23:36.428638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.035 [2024-12-06 18:23:36.428685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:26.035 [2024-12-06 18:23:36.428700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.463 ms 00:25:26.035 [2024-12-06 18:23:36.428715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.035 [2024-12-06 18:23:36.428817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.035 [2024-12-06 18:23:36.428852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:26.035 [2024-12-06 18:23:36.428864] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:26.035 [2024-12-06 18:23:36.428879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.035 [2024-12-06 18:23:36.435550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.035 [2024-12-06 18:23:36.435738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:26.035 [2024-12-06 18:23:36.435759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.625 ms 00:25:26.035 [2024-12-06 18:23:36.435779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.035 [2024-12-06 18:23:36.435918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.035 [2024-12-06 18:23:36.435939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:26.035 [2024-12-06 18:23:36.435951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:25:26.035 [2024-12-06 18:23:36.435973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.035 [2024-12-06 18:23:36.435999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.035 [2024-12-06 18:23:36.436016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:26.035 [2024-12-06 18:23:36.436026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:26.035 [2024-12-06 18:23:36.436041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.035 [2024-12-06 18:23:36.436066] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:26.035 [2024-12-06 18:23:36.440985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.035 [2024-12-06 18:23:36.441018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:26.035 [2024-12-06 18:23:36.441036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.927 ms 00:25:26.035 [2024-12-06 18:23:36.441062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.035 [2024-12-06 18:23:36.441142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.035 [2024-12-06 18:23:36.441155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:26.035 [2024-12-06 18:23:36.441171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:26.035 [2024-12-06 18:23:36.441187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.035 [2024-12-06 18:23:36.441213] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:26.035 [2024-12-06 18:23:36.441240] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:26.035 [2024-12-06 18:23:36.441311] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:26.035 [2024-12-06 18:23:36.441333] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:26.035 [2024-12-06 18:23:36.441428] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:26.035 [2024-12-06 18:23:36.441442] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:26.035 [2024-12-06 18:23:36.441466] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:26.035 [2024-12-06 18:23:36.441479] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:26.035 [2024-12-06 18:23:36.441496] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:26.035 [2024-12-06 18:23:36.441508] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:26.035 [2024-12-06 18:23:36.441523] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:26.035 [2024-12-06 18:23:36.441533] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:26.035 [2024-12-06 18:23:36.441552] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:26.035 [2024-12-06 18:23:36.441563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.035 [2024-12-06 18:23:36.441578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:26.035 [2024-12-06 18:23:36.441588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:25:26.035 [2024-12-06 18:23:36.441603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.035 [2024-12-06 18:23:36.441684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.035 [2024-12-06 18:23:36.441699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:26.035 [2024-12-06 18:23:36.441710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:26.035 [2024-12-06 18:23:36.441725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.035 [2024-12-06 18:23:36.441814] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:26.035 [2024-12-06 18:23:36.441831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:26.035 [2024-12-06 18:23:36.441842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:26.035 [2024-12-06 18:23:36.441858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:26.035 [2024-12-06 18:23:36.441868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:26.035 [2024-12-06 18:23:36.441884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:26.035 [2024-12-06 18:23:36.441894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:26.035 [2024-12-06 18:23:36.441914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:26.035 [2024-12-06 18:23:36.441924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:26.035 [2024-12-06 18:23:36.441938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:26.035 [2024-12-06 18:23:36.441947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:26.035 [2024-12-06 18:23:36.441962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:26.035 [2024-12-06 18:23:36.441972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:26.035 [2024-12-06 18:23:36.441986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:26.035 [2024-12-06 18:23:36.441996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:26.035 [2024-12-06 18:23:36.442011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:26.035 
[2024-12-06 18:23:36.442020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:26.035 [2024-12-06 18:23:36.442034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:26.035 [2024-12-06 18:23:36.442055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:26.035 [2024-12-06 18:23:36.442070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:26.035 [2024-12-06 18:23:36.442079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:26.035 [2024-12-06 18:23:36.442093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:26.035 [2024-12-06 18:23:36.442103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:26.035 [2024-12-06 18:23:36.442121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:26.035 [2024-12-06 18:23:36.442131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:26.035 [2024-12-06 18:23:36.442150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:26.035 [2024-12-06 18:23:36.442160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:26.035 [2024-12-06 18:23:36.442174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:26.035 [2024-12-06 18:23:36.442183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:26.035 [2024-12-06 18:23:36.442199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:26.035 [2024-12-06 18:23:36.442208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:26.035 [2024-12-06 18:23:36.442222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:26.035 [2024-12-06 18:23:36.442232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:26.035 [2024-12-06 18:23:36.442246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:26.035 [2024-12-06 18:23:36.442255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:26.035 [2024-12-06 18:23:36.442280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:26.035 [2024-12-06 18:23:36.442290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:26.035 [2024-12-06 18:23:36.442304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:26.035 [2024-12-06 18:23:36.442314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:26.035 [2024-12-06 18:23:36.442332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:26.035 [2024-12-06 18:23:36.442342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:26.035 [2024-12-06 18:23:36.442356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:26.035 [2024-12-06 18:23:36.442365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:26.035 [2024-12-06 18:23:36.442379] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:26.035 [2024-12-06 18:23:36.442402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:26.035 [2024-12-06 18:23:36.442416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:26.035 [2024-12-06 18:23:36.442427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:26.035 [2024-12-06 18:23:36.442442] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:26.035 [2024-12-06 18:23:36.442452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:26.035 [2024-12-06 18:23:36.442466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:26.035 [2024-12-06 18:23:36.442476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:26.035 [2024-12-06 18:23:36.442490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:26.035 [2024-12-06 18:23:36.442500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:26.035 [2024-12-06 18:23:36.442513] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:26.035 [2024-12-06 18:23:36.442525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:26.035 [2024-12-06 18:23:36.442543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:26.035 [2024-12-06 18:23:36.442554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:26.035 [2024-12-06 18:23:36.442567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:26.035 [2024-12-06 18:23:36.442577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:26.035 [2024-12-06 18:23:36.442590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:26.035 [2024-12-06 18:23:36.442600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:26.035 [2024-12-06 18:23:36.442612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:26.035 [2024-12-06 18:23:36.442622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:26.035 [2024-12-06 18:23:36.442635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:26.035 [2024-12-06 18:23:36.442645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:26.035 [2024-12-06 18:23:36.442658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:26.035 [2024-12-06 18:23:36.442668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:26.035 [2024-12-06 18:23:36.442680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:26.035 [2024-12-06 18:23:36.442691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:26.035 [2024-12-06 18:23:36.442703] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:26.035 [2024-12-06 
18:23:36.442715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:26.035 [2024-12-06 18:23:36.442730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:26.035 [2024-12-06 18:23:36.442741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:26.035 [2024-12-06 18:23:36.442753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:26.035 [2024-12-06 18:23:36.442764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:26.035 [2024-12-06 18:23:36.442777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.035 [2024-12-06 18:23:36.442787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:26.035 [2024-12-06 18:23:36.442799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.017 ms 00:25:26.035 [2024-12-06 18:23:36.442813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.035 [2024-12-06 18:23:36.483250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.035 [2024-12-06 18:23:36.483302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:26.036 [2024-12-06 18:23:36.483339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.437 ms 00:25:26.036 [2024-12-06 18:23:36.483355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.036 [2024-12-06 18:23:36.483503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.036 [2024-12-06 18:23:36.483516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:26.036 [2024-12-06 18:23:36.483531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:26.036 [2024-12-06 18:23:36.483541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.036 [2024-12-06 18:23:36.531807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.036 [2024-12-06 18:23:36.531858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:26.036 [2024-12-06 18:23:36.531877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.312 ms 00:25:26.036 [2024-12-06 18:23:36.531889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.036 [2024-12-06 18:23:36.531995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.036 [2024-12-06 18:23:36.532008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:26.036 [2024-12-06 18:23:36.532024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:26.036 [2024-12-06 18:23:36.532035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.036 [2024-12-06 18:23:36.532494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.036 [2024-12-06 18:23:36.532515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:26.036 [2024-12-06 18:23:36.532531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:25:26.036 [2024-12-06 18:23:36.532541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:26.036 [2024-12-06 18:23:36.532672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.036 [2024-12-06 18:23:36.532685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:26.036 [2024-12-06 18:23:36.532700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:25:26.036 [2024-12-06 18:23:36.532711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.036 [2024-12-06 18:23:36.555779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.036 [2024-12-06 18:23:36.555965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:26.036 [2024-12-06 18:23:36.555994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.079 ms 00:25:26.036 [2024-12-06 18:23:36.556005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.036 [2024-12-06 18:23:36.588635] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:26.036 [2024-12-06 18:23:36.588678] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:26.036 [2024-12-06 18:23:36.588699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.036 [2024-12-06 18:23:36.588710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:26.036 [2024-12-06 18:23:36.588727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.618 ms 00:25:26.036 [2024-12-06 18:23:36.588750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.294 [2024-12-06 18:23:36.618141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.294 [2024-12-06 18:23:36.618178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:26.294 [2024-12-06 18:23:36.618197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.349 ms 00:25:26.294 [2024-12-06 18:23:36.618224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.294 [2024-12-06 18:23:36.636680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.294 [2024-12-06 18:23:36.636728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:26.294 [2024-12-06 18:23:36.636750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.386 ms 00:25:26.294 [2024-12-06 18:23:36.636775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.294 [2024-12-06 18:23:36.654947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.294 [2024-12-06 18:23:36.655090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:26.294 [2024-12-06 18:23:36.655118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.106 ms 00:25:26.294 [2024-12-06 18:23:36.655129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.294 [2024-12-06 18:23:36.656001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.294 [2024-12-06 18:23:36.656029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:26.294 [2024-12-06 18:23:36.656047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:25:26.294 [2024-12-06 18:23:36.656057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.294 [2024-12-06 
18:23:36.742671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.294 [2024-12-06 18:23:36.742740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:26.294 [2024-12-06 18:23:36.742764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.720 ms 00:25:26.294 [2024-12-06 18:23:36.742776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.294 [2024-12-06 18:23:36.753587] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:26.294 [2024-12-06 18:23:36.769684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.294 [2024-12-06 18:23:36.769770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:26.294 [2024-12-06 18:23:36.769792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.821 ms 00:25:26.295 [2024-12-06 18:23:36.769808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.295 [2024-12-06 18:23:36.769916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.295 [2024-12-06 18:23:36.769935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:26.295 [2024-12-06 18:23:36.769948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:26.295 [2024-12-06 18:23:36.769963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.295 [2024-12-06 18:23:36.770016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.295 [2024-12-06 18:23:36.770032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:26.295 [2024-12-06 18:23:36.770043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:26.295 [2024-12-06 18:23:36.770064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.295 [2024-12-06 18:23:36.770088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.295 [2024-12-06 18:23:36.770103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:26.295 [2024-12-06 18:23:36.770113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:26.295 [2024-12-06 18:23:36.770128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.295 [2024-12-06 18:23:36.770170] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:26.295 [2024-12-06 18:23:36.770193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.295 [2024-12-06 18:23:36.770211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:26.295 [2024-12-06 18:23:36.770225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:26.295 [2024-12-06 18:23:36.770235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.295 [2024-12-06 18:23:36.807122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.295 [2024-12-06 18:23:36.807168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:26.295 [2024-12-06 18:23:36.807188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.908 ms 00:25:26.295 [2024-12-06 18:23:36.807215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.295 [2024-12-06 18:23:36.807349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.295 [2024-12-06 18:23:36.807365] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:26.295 [2024-12-06 18:23:36.807395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:26.295 [2024-12-06 18:23:36.807412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.295 [2024-12-06 18:23:36.808500] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:26.295 [2024-12-06 18:23:36.813003] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 406.392 ms, result 0 00:25:26.295 [2024-12-06 18:23:36.814245] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:26.295 Some configs were skipped because the RPC state that can call them passed over. 00:25:26.295 18:23:36 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:25:26.553 [2024-12-06 18:23:37.070513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.553 [2024-12-06 18:23:37.070714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:26.553 [2024-12-06 18:23:37.070806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:25:26.553 [2024-12-06 18:23:37.070852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.553 [2024-12-06 18:23:37.070938] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.065 ms, result 0 00:25:26.553 true 00:25:26.553 18:23:37 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:25:26.811 [2024-12-06 18:23:37.277840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.811 [2024-12-06 18:23:37.277892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:26.811 [2024-12-06 18:23:37.277911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.093 ms 00:25:26.811 [2024-12-06 18:23:37.277922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.811 [2024-12-06 18:23:37.277964] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.223 ms, result 0 00:25:26.811 true 00:25:26.811 18:23:37 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78733 00:25:26.811 18:23:37 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78733 ']' 00:25:26.811 18:23:37 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78733 00:25:26.811 18:23:37 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:25:26.811 18:23:37 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:26.811 18:23:37 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78733 00:25:26.811 18:23:37 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:26.811 18:23:37 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:26.811 18:23:37 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78733' 00:25:26.811 killing process with pid 78733 00:25:26.811 18:23:37 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78733 00:25:26.811 18:23:37 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78733 00:25:28.214 [2024-12-06 18:23:38.471902] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.214 [2024-12-06 18:23:38.472214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:28.214 [2024-12-06 18:23:38.472343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:28.214 [2024-12-06 18:23:38.472371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.214 [2024-12-06 18:23:38.472424] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:28.214 [2024-12-06 18:23:38.476294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.214 [2024-12-06 18:23:38.476335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:28.214 [2024-12-06 18:23:38.476360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.846 ms 00:25:28.214 [2024-12-06 18:23:38.476376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.214 [2024-12-06 18:23:38.476711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.214 [2024-12-06 18:23:38.476741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:28.214 [2024-12-06 18:23:38.476762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:25:28.214 [2024-12-06 18:23:38.476778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.214 [2024-12-06 18:23:38.480317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.214 [2024-12-06 18:23:38.480360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:28.214 [2024-12-06 18:23:38.480385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.511 ms 00:25:28.214 [2024-12-06 18:23:38.480402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.214 [2024-12-06 18:23:38.487559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.214 [2024-12-06 18:23:38.487609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:28.214 [2024-12-06 18:23:38.487635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.102 ms 00:25:28.215 [2024-12-06 18:23:38.487651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.215 [2024-12-06 18:23:38.502838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.215 [2024-12-06 18:23:38.503065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:28.215 [2024-12-06 18:23:38.503113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.129 ms 00:25:28.215 [2024-12-06 18:23:38.503134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.215 [2024-12-06 18:23:38.512402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.215 [2024-12-06 18:23:38.512447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:28.215 [2024-12-06 18:23:38.512464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.170 ms 00:25:28.215 [2024-12-06 18:23:38.512491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.215 [2024-12-06 18:23:38.512626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.215 [2024-12-06 18:23:38.512640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:28.215 [2024-12-06 18:23:38.512654] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:25:28.215 [2024-12-06 18:23:38.512665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.215 [2024-12-06 18:23:38.527559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.215 [2024-12-06 18:23:38.527593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:28.215 [2024-12-06 18:23:38.527612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.888 ms 00:25:28.215 [2024-12-06 18:23:38.527639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.215 [2024-12-06 18:23:38.542085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.215 [2024-12-06 18:23:38.542118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:28.215 [2024-12-06 18:23:38.542158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.395 ms 00:25:28.215 [2024-12-06 18:23:38.542168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.215 [2024-12-06 18:23:38.556342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.215 [2024-12-06 18:23:38.556499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:28.215 [2024-12-06 18:23:38.556529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.138 ms 00:25:28.215 [2024-12-06 18:23:38.556539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.215 [2024-12-06 18:23:38.571213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.215 [2024-12-06 18:23:38.571248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:28.215 [2024-12-06 18:23:38.571389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.608 ms 00:25:28.215 [2024-12-06 18:23:38.571410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.215 [2024-12-06 18:23:38.571527] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:28.215 [2024-12-06 18:23:38.571546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 
18:23:38.571688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.571991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.572006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.572017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:25:28.215 [2024-12-06 18:23:38.572033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.572045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.572066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.572077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.572093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.572104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.572121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.572132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.572147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.572158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:28.215 [2024-12-06 18:23:38.572173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:28.216 [2024-12-06 18:23:38.572942] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:28.216 [2024-12-06 18:23:38.572968] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0182d045-795a-443b-ad13-478c5d3e8b79 00:25:28.216 [2024-12-06 18:23:38.572985] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:28.216 [2024-12-06 18:23:38.573000] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:28.216 [2024-12-06 18:23:38.573009] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:28.216 [2024-12-06 18:23:38.573024] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:28.216 [2024-12-06 18:23:38.573034] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:28.216 [2024-12-06 18:23:38.573049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:28.216 [2024-12-06 18:23:38.573059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:28.216 [2024-12-06 18:23:38.573073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:28.216 [2024-12-06 18:23:38.573082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:28.216 [2024-12-06 18:23:38.573097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
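
Editor's aside: the stretch of log above and below is built from trace_step records that arrive in fixed groups — an Action (or Rollback) marker, a name line, a duration line, and a status line. When triaging a run like this one it can help to reduce those groups to a per-step timing table. The sketch below is an illustrative post-processing one-liner, not part of the test run itself; build.log is a hypothetical saved copy of this console output with one record per line.

# Summarize per-step FTL timings from a saved console log (hypothetical
# file name build.log; assumes one trace_step record per line, as in the
# raw Jenkins console output).
awk '/428:trace_step/ { sub(/.*name: /, ""); name = $0 }
     /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                        printf "%-35s %10.3f ms\n", name, $0 }' build.log

Run against this log, each name/duration pair reduces to one row, e.g. "Restore P2L checkpoints  86.720 ms" from the startup sequence above.
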
00:25:28.216 [2024-12-06 18:23:38.573107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:28.216 [2024-12-06 18:23:38.573123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:25:28.216 [2024-12-06 18:23:38.573134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.216 [2024-12-06 18:23:38.592965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.216 [2024-12-06 18:23:38.592998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:28.217 [2024-12-06 18:23:38.593021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.826 ms 00:25:28.217 [2024-12-06 18:23:38.593031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.217 [2024-12-06 18:23:38.593642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.217 [2024-12-06 18:23:38.593661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:28.217 [2024-12-06 18:23:38.593683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:25:28.217 [2024-12-06 18:23:38.593693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.217 [2024-12-06 18:23:38.663595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.217 [2024-12-06 18:23:38.663744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:28.217 [2024-12-06 18:23:38.663774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.217 [2024-12-06 18:23:38.663785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.217 [2024-12-06 18:23:38.663877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.217 [2024-12-06 18:23:38.663890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:28.217 [2024-12-06 18:23:38.663912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.217 [2024-12-06 18:23:38.663923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.217 [2024-12-06 18:23:38.663980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.217 [2024-12-06 18:23:38.663993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:28.217 [2024-12-06 18:23:38.664013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.217 [2024-12-06 18:23:38.664023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.217 [2024-12-06 18:23:38.664047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.217 [2024-12-06 18:23:38.664058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:28.217 [2024-12-06 18:23:38.664072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.217 [2024-12-06 18:23:38.664088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.476 [2024-12-06 18:23:38.788913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.476 [2024-12-06 18:23:38.788983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:28.476 [2024-12-06 18:23:38.789003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.476 [2024-12-06 18:23:38.789014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.476 [2024-12-06 
18:23:38.890927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.476 [2024-12-06 18:23:38.890989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:28.476 [2024-12-06 18:23:38.891026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.476 [2024-12-06 18:23:38.891043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.476 [2024-12-06 18:23:38.891156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.476 [2024-12-06 18:23:38.891169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:28.476 [2024-12-06 18:23:38.891189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.476 [2024-12-06 18:23:38.891200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.476 [2024-12-06 18:23:38.891235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.476 [2024-12-06 18:23:38.891246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:28.476 [2024-12-06 18:23:38.891261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.476 [2024-12-06 18:23:38.891297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.476 [2024-12-06 18:23:38.891428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.476 [2024-12-06 18:23:38.891442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:28.476 [2024-12-06 18:23:38.891458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.476 [2024-12-06 18:23:38.891468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.476 [2024-12-06 18:23:38.891516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.476 [2024-12-06 18:23:38.891529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:28.476 [2024-12-06 18:23:38.891544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.476 [2024-12-06 18:23:38.891555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.476 [2024-12-06 18:23:38.891604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.476 [2024-12-06 18:23:38.891616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:28.476 [2024-12-06 18:23:38.891635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.476 [2024-12-06 18:23:38.891646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.476 [2024-12-06 18:23:38.891695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:28.476 [2024-12-06 18:23:38.891707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:28.476 [2024-12-06 18:23:38.891722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:28.476 [2024-12-06 18:23:38.891733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.476 [2024-12-06 18:23:38.891883] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 420.637 ms, result 0 00:25:29.411 18:23:39 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:29.670 [2024-12-06 18:23:40.009888] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:25:29.670 [2024-12-06 18:23:40.010010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78797 ] 00:25:29.670 [2024-12-06 18:23:40.192333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.929 [2024-12-06 18:23:40.306272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.187 [2024-12-06 18:23:40.668332] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:30.187 [2024-12-06 18:23:40.668403] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:30.448 [2024-12-06 18:23:40.830106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.448 [2024-12-06 18:23:40.830166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:30.448 [2024-12-06 18:23:40.830182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:30.448 [2024-12-06 18:23:40.830193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.448 [2024-12-06 18:23:40.833389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.448 [2024-12-06 18:23:40.833426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:30.448 [2024-12-06 18:23:40.833450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.180 ms 00:25:30.448 [2024-12-06 18:23:40.833460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.448 [2024-12-06 18:23:40.833586] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:30.448 [2024-12-06 18:23:40.834581] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:30.448 [2024-12-06 18:23:40.834616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.448 [2024-12-06 18:23:40.834628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:30.448 [2024-12-06 18:23:40.834639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.039 ms 00:25:30.448 [2024-12-06 18:23:40.834648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.448 [2024-12-06 18:23:40.836111] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:30.448 [2024-12-06 18:23:40.855094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.448 [2024-12-06 18:23:40.855133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:30.448 [2024-12-06 18:23:40.855147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.014 ms 00:25:30.448 [2024-12-06 18:23:40.855158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.448 [2024-12-06 18:23:40.855259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.448 [2024-12-06 18:23:40.855315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:30.448 [2024-12-06 18:23:40.855327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:25:30.449 [2024-12-06 
18:23:40.855352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.449 [2024-12-06 18:23:40.862006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.449 [2024-12-06 18:23:40.862034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:30.449 [2024-12-06 18:23:40.862045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.620 ms 00:25:30.449 [2024-12-06 18:23:40.862055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.449 [2024-12-06 18:23:40.862166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.449 [2024-12-06 18:23:40.862180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:30.449 [2024-12-06 18:23:40.862191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:30.449 [2024-12-06 18:23:40.862202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.449 [2024-12-06 18:23:40.862233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.449 [2024-12-06 18:23:40.862244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:30.449 [2024-12-06 18:23:40.862255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:30.449 [2024-12-06 18:23:40.862265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.449 [2024-12-06 18:23:40.862302] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:30.449 [2024-12-06 18:23:40.867060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.449 [2024-12-06 18:23:40.867198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:30.449 [2024-12-06 18:23:40.867293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.769 ms 00:25:30.449 [2024-12-06 18:23:40.867332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.449 [2024-12-06 18:23:40.867431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.449 [2024-12-06 18:23:40.867575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:30.449 [2024-12-06 18:23:40.867668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:30.449 [2024-12-06 18:23:40.867699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.449 [2024-12-06 18:23:40.867749] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:30.449 [2024-12-06 18:23:40.867792] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:30.449 [2024-12-06 18:23:40.867864] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:30.449 [2024-12-06 18:23:40.868112] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:30.449 [2024-12-06 18:23:40.868241] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:30.449 [2024-12-06 18:23:40.868420] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:30.449 [2024-12-06 18:23:40.868470] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
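
Editor's aside: the layout dump that follows reports each region in MiB, while the "SB metadata layout" records further down give the same regions as hex block offsets and sizes. The two views line up under a 4 KiB FTL block: region type 0x2 has blk_sz 0x5a00 = 23040 blocks, which is exactly the 90.00 MiB the MiB view prints for the l2p region. A minimal shell conversion for reading dumps like this one follows; the 4 KiB block size is an assumption inferred from the dump's own figures, not stated in it.

# Convert a region size from the hex block count printed in the
# superblock layout dump to MiB, assuming 4 KiB FTL blocks (an
# assumption consistent with the MiB figures in this dump).
blk_sz=0x5a00                                  # l2p region size, from the dump
echo "$(( blk_sz * 4096 / 1024 / 1024 )) MiB"  # -> 90 MiB
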
00:25:30.449 [2024-12-06 18:23:40.868525] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:30.449 [2024-12-06 18:23:40.868575] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:30.449 [2024-12-06 18:23:40.868685] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:30.449 [2024-12-06 18:23:40.868717] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:30.449 [2024-12-06 18:23:40.868746] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:30.449 [2024-12-06 18:23:40.868776] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:30.449 [2024-12-06 18:23:40.868817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.449 [2024-12-06 18:23:40.868829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:30.449 [2024-12-06 18:23:40.868841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:25:30.449 [2024-12-06 18:23:40.868851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.449 [2024-12-06 18:23:40.868935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.449 [2024-12-06 18:23:40.868951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:30.449 [2024-12-06 18:23:40.868962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:30.449 [2024-12-06 18:23:40.868972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.449 [2024-12-06 18:23:40.869064] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:30.449 [2024-12-06 18:23:40.869078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:30.449 [2024-12-06 18:23:40.869088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:30.449 [2024-12-06 18:23:40.869099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.449 [2024-12-06 18:23:40.869111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:30.449 [2024-12-06 18:23:40.869120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:30.449 [2024-12-06 18:23:40.869130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:30.449 [2024-12-06 18:23:40.869139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:30.449 [2024-12-06 18:23:40.869148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:30.449 [2024-12-06 18:23:40.869157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:30.449 [2024-12-06 18:23:40.869167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:30.449 [2024-12-06 18:23:40.869184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:30.449 [2024-12-06 18:23:40.869193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:30.449 [2024-12-06 18:23:40.869202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:30.449 [2024-12-06 18:23:40.869212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:30.449 [2024-12-06 18:23:40.869221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.449 [2024-12-06 18:23:40.869231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:25:30.449 [2024-12-06 18:23:40.869240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:30.449 [2024-12-06 18:23:40.869249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.449 [2024-12-06 18:23:40.869259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:30.449 [2024-12-06 18:23:40.869286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:30.449 [2024-12-06 18:23:40.869295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.449 [2024-12-06 18:23:40.869304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:30.449 [2024-12-06 18:23:40.869315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:30.449 [2024-12-06 18:23:40.869324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.449 [2024-12-06 18:23:40.869333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:30.449 [2024-12-06 18:23:40.869343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:30.449 [2024-12-06 18:23:40.869352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.449 [2024-12-06 18:23:40.869361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:30.449 [2024-12-06 18:23:40.869370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:30.449 [2024-12-06 18:23:40.869380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.449 [2024-12-06 18:23:40.869389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:30.449 [2024-12-06 18:23:40.869398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:30.449 [2024-12-06 18:23:40.869407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:30.449 [2024-12-06 18:23:40.869416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:30.449 [2024-12-06 18:23:40.869426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:30.449 [2024-12-06 18:23:40.869435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:30.449 [2024-12-06 18:23:40.869444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:30.449 [2024-12-06 18:23:40.869453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:30.449 [2024-12-06 18:23:40.869463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.449 [2024-12-06 18:23:40.869472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:30.449 [2024-12-06 18:23:40.869481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:30.449 [2024-12-06 18:23:40.869490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.449 [2024-12-06 18:23:40.869499] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:30.449 [2024-12-06 18:23:40.869509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:30.449 [2024-12-06 18:23:40.869522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:30.449 [2024-12-06 18:23:40.869532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.449 [2024-12-06 18:23:40.869542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:30.449 [2024-12-06 18:23:40.869553] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:30.449 [2024-12-06 18:23:40.869562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:30.449 [2024-12-06 18:23:40.869571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:30.449 [2024-12-06 18:23:40.869580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:30.449 [2024-12-06 18:23:40.869590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:30.449 [2024-12-06 18:23:40.869600] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:30.449 [2024-12-06 18:23:40.869614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:30.449 [2024-12-06 18:23:40.869625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:30.449 [2024-12-06 18:23:40.869635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:30.449 [2024-12-06 18:23:40.869646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:30.449 [2024-12-06 18:23:40.869656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:30.449 [2024-12-06 18:23:40.869666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:30.450 [2024-12-06 18:23:40.869676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:30.450 [2024-12-06 18:23:40.869686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:30.450 [2024-12-06 18:23:40.869697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:30.450 [2024-12-06 18:23:40.869706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:30.450 [2024-12-06 18:23:40.869716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:30.450 [2024-12-06 18:23:40.869726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:30.450 [2024-12-06 18:23:40.869737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:30.450 [2024-12-06 18:23:40.869747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:30.450 [2024-12-06 18:23:40.869757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:30.450 [2024-12-06 18:23:40.869767] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:30.450 [2024-12-06 18:23:40.869778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:30.450 [2024-12-06 18:23:40.869789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:30.450 [2024-12-06 18:23:40.869800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:30.450 [2024-12-06 18:23:40.869810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:30.450 [2024-12-06 18:23:40.869820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:30.450 [2024-12-06 18:23:40.869832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.450 [2024-12-06 18:23:40.869845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:30.450 [2024-12-06 18:23:40.869855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:25:30.450 [2024-12-06 18:23:40.869865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.450 [2024-12-06 18:23:40.907582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.450 [2024-12-06 18:23:40.907624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:30.450 [2024-12-06 18:23:40.907638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.719 ms 00:25:30.450 [2024-12-06 18:23:40.907666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.450 [2024-12-06 18:23:40.907794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.450 [2024-12-06 18:23:40.907808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:30.450 [2024-12-06 18:23:40.907819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:30.450 [2024-12-06 18:23:40.907829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.450 [2024-12-06 18:23:40.962335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.450 [2024-12-06 18:23:40.962396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:30.450 [2024-12-06 18:23:40.962431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.568 ms 00:25:30.450 [2024-12-06 18:23:40.962442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.450 [2024-12-06 18:23:40.962588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.450 [2024-12-06 18:23:40.962601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:30.450 [2024-12-06 18:23:40.962612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:30.450 [2024-12-06 18:23:40.962623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.450 [2024-12-06 18:23:40.963053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.450 [2024-12-06 18:23:40.963067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:30.450 [2024-12-06 18:23:40.963084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:25:30.450 [2024-12-06 18:23:40.963094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.450 [2024-12-06 18:23:40.963217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:30.450 [2024-12-06 18:23:40.963230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:30.450 [2024-12-06 18:23:40.963241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:25:30.450 [2024-12-06 18:23:40.963251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.450 [2024-12-06 18:23:40.982362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.450 [2024-12-06 18:23:40.982560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:30.450 [2024-12-06 18:23:40.982583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.117 ms 00:25:30.450 [2024-12-06 18:23:40.982594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.450 [2024-12-06 18:23:41.001521] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:30.450 [2024-12-06 18:23:41.001559] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:30.450 [2024-12-06 18:23:41.001575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.450 [2024-12-06 18:23:41.001602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:30.450 [2024-12-06 18:23:41.001615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.889 ms 00:25:30.450 [2024-12-06 18:23:41.001625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.710 [2024-12-06 18:23:41.030958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.710 [2024-12-06 18:23:41.030999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:30.710 [2024-12-06 18:23:41.031013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.299 ms 00:25:30.710 [2024-12-06 18:23:41.031024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.710 [2024-12-06 18:23:41.048973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.710 [2024-12-06 18:23:41.049023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:30.710 [2024-12-06 18:23:41.049036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.896 ms 00:25:30.710 [2024-12-06 18:23:41.049046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.710 [2024-12-06 18:23:41.067208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.710 [2024-12-06 18:23:41.067245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:30.710 [2024-12-06 18:23:41.067258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.113 ms 00:25:30.710 [2024-12-06 18:23:41.067282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.710 [2024-12-06 18:23:41.068095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.710 [2024-12-06 18:23:41.068127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:30.710 [2024-12-06 18:23:41.068140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:25:30.710 [2024-12-06 18:23:41.068151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.710 [2024-12-06 18:23:41.154737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.710 [2024-12-06 
18:23:41.154819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:30.710 [2024-12-06 18:23:41.154837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.695 ms 00:25:30.710 [2024-12-06 18:23:41.154849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.710 [2024-12-06 18:23:41.166912] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:30.710 [2024-12-06 18:23:41.183519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.710 [2024-12-06 18:23:41.183582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:30.710 [2024-12-06 18:23:41.183598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.608 ms 00:25:30.710 [2024-12-06 18:23:41.183616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.710 [2024-12-06 18:23:41.183755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.710 [2024-12-06 18:23:41.183769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:30.710 [2024-12-06 18:23:41.183780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:30.710 [2024-12-06 18:23:41.183791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.710 [2024-12-06 18:23:41.183847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.710 [2024-12-06 18:23:41.183859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:30.710 [2024-12-06 18:23:41.183870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:30.710 [2024-12-06 18:23:41.183884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.710 [2024-12-06 18:23:41.183921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.710 [2024-12-06 18:23:41.183935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:30.710 [2024-12-06 18:23:41.183946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:30.710 [2024-12-06 18:23:41.183956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.710 [2024-12-06 18:23:41.183995] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:30.710 [2024-12-06 18:23:41.184007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.710 [2024-12-06 18:23:41.184017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:30.710 [2024-12-06 18:23:41.184027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:30.710 [2024-12-06 18:23:41.184037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.710 [2024-12-06 18:23:41.221150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.710 [2024-12-06 18:23:41.221385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:30.710 [2024-12-06 18:23:41.221409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.149 ms 00:25:30.710 [2024-12-06 18:23:41.221422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.710 [2024-12-06 18:23:41.221582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.710 [2024-12-06 18:23:41.221598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:30.710 [2024-12-06 
18:23:41.221609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:30.710 [2024-12-06 18:23:41.221619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.710 [2024-12-06 18:23:41.222536] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:30.710 [2024-12-06 18:23:41.226758] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 392.748 ms, result 0 00:25:30.710 [2024-12-06 18:23:41.227571] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:30.710 [2024-12-06 18:23:41.246061] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:32.089  [2024-12-06T18:23:43.642Z] Copying: 29/256 [MB] (29 MBps) [2024-12-06T18:23:44.582Z] Copying: 55/256 [MB] (25 MBps) [2024-12-06T18:23:45.519Z] Copying: 79/256 [MB] (24 MBps) [2024-12-06T18:23:46.455Z] Copying: 104/256 [MB] (24 MBps) [2024-12-06T18:23:47.393Z] Copying: 129/256 [MB] (25 MBps) [2024-12-06T18:23:48.331Z] Copying: 153/256 [MB] (24 MBps) [2024-12-06T18:23:49.709Z] Copying: 179/256 [MB] (25 MBps) [2024-12-06T18:23:50.645Z] Copying: 205/256 [MB] (25 MBps) [2024-12-06T18:23:51.586Z] Copying: 230/256 [MB] (25 MBps) [2024-12-06T18:23:51.586Z] Copying: 255/256 [MB] (25 MBps) [2024-12-06T18:23:51.845Z] Copying: 256/256 [MB] (average 25 MBps)[2024-12-06 18:23:51.739467] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:41.269 [2024-12-06 18:23:51.759517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.269 [2024-12-06 18:23:51.759566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:41.269 [2024-12-06 18:23:51.759590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:41.269 [2024-12-06 18:23:51.759600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.269 [2024-12-06 18:23:51.759629] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:41.269 [2024-12-06 18:23:51.764111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.269 [2024-12-06 18:23:51.764146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:41.269 [2024-12-06 18:23:51.764159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.471 ms 00:25:41.269 [2024-12-06 18:23:51.764169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.269 [2024-12-06 18:23:51.764435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.269 [2024-12-06 18:23:51.764449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:41.269 [2024-12-06 18:23:51.764461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:25:41.269 [2024-12-06 18:23:51.764471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.269 [2024-12-06 18:23:51.767355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.269 [2024-12-06 18:23:51.767379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:41.269 [2024-12-06 18:23:51.767391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.867 ms 00:25:41.269 [2024-12-06 18:23:51.767402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:41.269 [2024-12-06 18:23:51.773312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.269 [2024-12-06 18:23:51.773482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:41.269 [2024-12-06 18:23:51.773505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.898 ms 00:25:41.269 [2024-12-06 18:23:51.773516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.269 [2024-12-06 18:23:51.810768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.269 [2024-12-06 18:23:51.810810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:41.269 [2024-12-06 18:23:51.810825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.223 ms 00:25:41.269 [2024-12-06 18:23:51.810835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.269 [2024-12-06 18:23:51.832092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.269 [2024-12-06 18:23:51.832133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:41.269 [2024-12-06 18:23:51.832154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.246 ms 00:25:41.269 [2024-12-06 18:23:51.832164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.269 [2024-12-06 18:23:51.832330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.269 [2024-12-06 18:23:51.832345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:41.269 [2024-12-06 18:23:51.832368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:25:41.269 [2024-12-06 18:23:51.832378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.530 [2024-12-06 18:23:51.869036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.530 [2024-12-06 18:23:51.869091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:41.530 [2024-12-06 18:23:51.869106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.682 ms 00:25:41.530 [2024-12-06 18:23:51.869116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.530 [2024-12-06 18:23:51.905231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.530 [2024-12-06 18:23:51.905281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:41.530 [2024-12-06 18:23:51.905295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.130 ms 00:25:41.530 [2024-12-06 18:23:51.905305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.530 [2024-12-06 18:23:51.940214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.530 [2024-12-06 18:23:51.940253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:41.530 [2024-12-06 18:23:51.940278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.922 ms 00:25:41.530 [2024-12-06 18:23:51.940289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.530 [2024-12-06 18:23:51.975233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.530 [2024-12-06 18:23:51.975279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:41.530 [2024-12-06 18:23:51.975292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.932 ms 00:25:41.530 
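The shutdown pass above persists the L2P, NV cache metadata, valid map, P2L metadata, band info, trim metadata and superblock in turn, and every step is reported as a name/duration/status triple by mngt/ftl_mngt.c. To see where the shutdown time actually goes, those records can be folded into per-step totals; the snippet below is an editor's sketch against the record layout visible in this log, not part of the SPDK test scripts, and "build.log" is a placeholder path.

    # Sketch only: total the per-step durations from trace_step records.
    # Relies on the layout shown above ("name: <step>" lines come from
    # ftl_mngt.c:428, "duration: <n> ms" lines from ftl_mngt.c:430) and
    # assumes one record per line, as Jenkins emits them.
    awk -F'\\[FTL\\]\\[ftl0\\] ' '
        / 428:trace_step/ { sub(/^name: /, "", $2); step = $2 }
        / 430:trace_step/ { split($2, f, " "); ms[step] += f[2] }
        END { for (s in ms) printf "%10.3f ms  %s\n", ms[s], s }
    ' build.log | sort -rn | head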
[2024-12-06 18:23:51.975302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.530 [2024-12-06 18:23:51.975343] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:41.530 [2024-12-06 18:23:51.975359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975612] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:41.530 [2024-12-06 18:23:51.975728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 
18:23:51.975874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.975999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:25:41.531 [2024-12-06 18:23:51.976138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:41.531 [2024-12-06 18:23:51.976451] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:41.531 [2024-12-06 18:23:51.976461] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0182d045-795a-443b-ad13-478c5d3e8b79 00:25:41.531 [2024-12-06 18:23:51.976472] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:41.531 [2024-12-06 18:23:51.976482] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:41.531 [2024-12-06 18:23:51.976492] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:41.531 [2024-12-06 18:23:51.976502] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:41.531 [2024-12-06 18:23:51.976511] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:41.531 [2024-12-06 18:23:51.976521] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:41.531 [2024-12-06 18:23:51.976535] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:41.531 [2024-12-06 18:23:51.976543] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:41.531 [2024-12-06 18:23:51.976552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:41.531 [2024-12-06 18:23:51.976562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.531 [2024-12-06 18:23:51.976573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:41.531 [2024-12-06 18:23:51.976584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.222 ms 00:25:41.531 [2024-12-06 18:23:51.976594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.531 [2024-12-06 18:23:51.996556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.531 [2024-12-06 18:23:51.996720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:41.531 [2024-12-06 18:23:51.996741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.972 ms 00:25:41.531 [2024-12-06 18:23:51.996752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.531 [2024-12-06 18:23:51.997315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.531 [2024-12-06 18:23:51.997329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:41.531 [2024-12-06 18:23:51.997340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:25:41.531 [2024-12-06 18:23:51.997350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.531 [2024-12-06 18:23:52.051984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.531 [2024-12-06 18:23:52.052022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:41.531 [2024-12-06 18:23:52.052035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.531 [2024-12-06 18:23:52.052050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.531 [2024-12-06 18:23:52.052124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.532 [2024-12-06 18:23:52.052136] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:41.532 [2024-12-06 18:23:52.052147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.532 [2024-12-06 18:23:52.052157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.532 [2024-12-06 18:23:52.052206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.532 [2024-12-06 18:23:52.052219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:41.532 [2024-12-06 18:23:52.052229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.532 [2024-12-06 18:23:52.052239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.532 [2024-12-06 18:23:52.052279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.532 [2024-12-06 18:23:52.052291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:41.532 [2024-12-06 18:23:52.052317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.532 [2024-12-06 18:23:52.052327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.790 [2024-12-06 18:23:52.177353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.790 [2024-12-06 18:23:52.177411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:41.790 [2024-12-06 18:23:52.177426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.790 [2024-12-06 18:23:52.177437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.790 [2024-12-06 18:23:52.278908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.790 [2024-12-06 18:23:52.278963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:41.790 [2024-12-06 18:23:52.278978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.790 [2024-12-06 18:23:52.278989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.790 [2024-12-06 18:23:52.279079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.790 [2024-12-06 18:23:52.279091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:41.790 [2024-12-06 18:23:52.279102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.790 [2024-12-06 18:23:52.279113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.790 [2024-12-06 18:23:52.279143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.791 [2024-12-06 18:23:52.279160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:41.791 [2024-12-06 18:23:52.279170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.791 [2024-12-06 18:23:52.279180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.791 [2024-12-06 18:23:52.279303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.791 [2024-12-06 18:23:52.279318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:41.791 [2024-12-06 18:23:52.279329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.791 [2024-12-06 18:23:52.279339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.791 [2024-12-06 18:23:52.279378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:25:41.791 [2024-12-06 18:23:52.279391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:41.791 [2024-12-06 18:23:52.279405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.791 [2024-12-06 18:23:52.279416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.791 [2024-12-06 18:23:52.279455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.791 [2024-12-06 18:23:52.279466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:41.791 [2024-12-06 18:23:52.279476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.791 [2024-12-06 18:23:52.279486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.791 [2024-12-06 18:23:52.279529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.791 [2024-12-06 18:23:52.279545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:41.791 [2024-12-06 18:23:52.279555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.791 [2024-12-06 18:23:52.279565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.791 [2024-12-06 18:23:52.279709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 521.038 ms, result 0 00:25:43.170 00:25:43.170 00:25:43.170 18:23:53 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:43.429 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:25:43.429 18:23:53 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:25:43.430 18:23:53 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:25:43.430 18:23:53 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:43.430 18:23:53 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:43.430 18:23:53 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:25:43.430 18:23:53 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:43.430 18:23:53 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78733 00:25:43.430 18:23:53 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78733 ']' 00:25:43.430 18:23:53 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78733 00:25:43.430 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78733) - No such process 00:25:43.430 Process with pid 78733 is not found 00:25:43.430 18:23:53 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78733 is not found' 00:25:43.430 00:25:43.430 real 1m11.568s 00:25:43.430 user 1m36.813s 00:25:43.430 sys 0m6.864s 00:25:43.430 ************************************ 00:25:43.430 END TEST ftl_trim 00:25:43.430 ************************************ 00:25:43.430 18:23:53 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:43.430 18:23:53 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:43.430 18:23:53 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:25:43.430 18:23:53 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:43.430 18:23:53 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:43.430 18:23:53 ftl -- common/autotest_common.sh@10 
-- # set +x 00:25:43.430 ************************************ 00:25:43.430 START TEST ftl_restore 00:25:43.430 ************************************ 00:25:43.430 18:23:53 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:25:43.690 * Looking for test storage... 00:25:43.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:43.690 18:23:54 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:43.690 18:23:54 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:25:43.690 18:23:54 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:43.690 18:23:54 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:43.690 18:23:54 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:25:43.690 18:23:54 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:43.690 18:23:54 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:43.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.690 --rc genhtml_branch_coverage=1 00:25:43.690 --rc genhtml_function_coverage=1 00:25:43.690 --rc genhtml_legend=1 00:25:43.690 --rc geninfo_all_blocks=1 00:25:43.690 --rc geninfo_unexecuted_blocks=1 00:25:43.690 00:25:43.690 ' 00:25:43.690 18:23:54 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:43.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.690 --rc genhtml_branch_coverage=1 00:25:43.690 --rc genhtml_function_coverage=1 00:25:43.690 --rc genhtml_legend=1 00:25:43.690 --rc geninfo_all_blocks=1 00:25:43.690 --rc geninfo_unexecuted_blocks=1 00:25:43.690 00:25:43.690 ' 00:25:43.690 18:23:54 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:43.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.690 --rc genhtml_branch_coverage=1 00:25:43.690 --rc genhtml_function_coverage=1 00:25:43.690 --rc genhtml_legend=1 00:25:43.690 --rc geninfo_all_blocks=1 00:25:43.690 --rc geninfo_unexecuted_blocks=1 00:25:43.690 00:25:43.690 ' 00:25:43.690 18:23:54 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:43.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.690 --rc genhtml_branch_coverage=1 00:25:43.690 --rc genhtml_function_coverage=1 00:25:43.690 --rc genhtml_legend=1 00:25:43.690 --rc geninfo_all_blocks=1 00:25:43.690 --rc geninfo_unexecuted_blocks=1 00:25:43.690 00:25:43.690 ' 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
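The scripts/common.sh xtrace above is the lcov version gate: lcov --version | awk '{print $NF}' yields 1.15, cmp_versions walks the dot-separated fields until one differs, and since 1 < 2 the legacy lcov 1.x coverage flags are exported. A condensed sketch of that comparison follows (not the scripts/common.sh source; ver_lt is a hypothetical helper name).

    # Sketch: field-by-field dotted-version compare, as the xtrace above
    # performs it. "IFS=.- read -ra" mirrors the splitting shown in the
    # log; missing fields default to 0. ver_lt is a made-up name.
    ver_lt() {
        local -a a b; local i
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 &&
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'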
00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.c7rxlF2uda 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:43.690 18:23:54 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:25:43.691 18:23:54 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:25:43.691 18:23:54 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:43.691 18:23:54 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:25:43.691 18:23:54 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:25:43.691 18:23:54 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:25:43.691 18:23:54 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:43.691 
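Before creating anything, restore.sh registers its cleanup handler, just as the trim test's fio_kill did above, so an interrupted run still removes its artifacts and kills the target it is about to start. A hedged sketch of that pattern follows; the restore_kill body is a guess modeled on fio_kill, while killprocess and waitforlisten are real helpers from test/common/autotest_common.sh.

    # Sketch of the guard-then-launch pattern (paths trimmed; restore_kill's
    # body is assumed, modeled on the fio_kill teardown earlier in this log).
    restore_kill() {
        rm -f "$testdir/config/ftl.json" "$testdir/testfile.md5"
        killprocess "$svcpid"        # kills the spdk_tgt started below
    }
    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT

    "$rootdir/build/bin/spdk_tgt" &  # SPDK target in the background
    svcpid=$!
    waitforlisten "$svcpid"          # blocks until /var/tmp/spdk.sock answers RPCs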
18:23:54 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79007 00:25:43.691 18:23:54 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:43.691 18:23:54 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79007 00:25:43.691 18:23:54 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79007 ']' 00:25:43.691 18:23:54 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.691 18:23:54 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:43.691 18:23:54 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.691 18:23:54 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:43.691 18:23:54 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:43.950 [2024-12-06 18:23:54.349973] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:25:43.950 [2024-12-06 18:23:54.350312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79007 ] 00:25:44.210 [2024-12-06 18:23:54.531030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.210 [2024-12-06 18:23:54.648560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.147 18:23:55 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:45.147 18:23:55 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:25:45.147 18:23:55 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:45.147 18:23:55 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:25:45.147 18:23:55 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:45.147 18:23:55 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:25:45.147 18:23:55 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:25:45.147 18:23:55 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:45.406 18:23:55 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:45.406 18:23:55 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:25:45.406 18:23:55 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:45.406 18:23:55 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:45.406 18:23:55 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:45.406 18:23:55 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:45.406 18:23:55 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:45.406 18:23:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:45.666 18:23:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:45.666 { 00:25:45.666 "name": "nvme0n1", 00:25:45.666 "aliases": [ 00:25:45.666 "5bdff3df-78d9-4292-a7b9-28b0435b32e7" 00:25:45.666 ], 00:25:45.666 "product_name": "NVMe disk", 00:25:45.666 "block_size": 4096, 00:25:45.666 "num_blocks": 1310720, 00:25:45.666 "uuid": 
"5bdff3df-78d9-4292-a7b9-28b0435b32e7", 00:25:45.666 "numa_id": -1, 00:25:45.666 "assigned_rate_limits": { 00:25:45.666 "rw_ios_per_sec": 0, 00:25:45.666 "rw_mbytes_per_sec": 0, 00:25:45.666 "r_mbytes_per_sec": 0, 00:25:45.666 "w_mbytes_per_sec": 0 00:25:45.666 }, 00:25:45.666 "claimed": true, 00:25:45.666 "claim_type": "read_many_write_one", 00:25:45.666 "zoned": false, 00:25:45.666 "supported_io_types": { 00:25:45.666 "read": true, 00:25:45.666 "write": true, 00:25:45.666 "unmap": true, 00:25:45.666 "flush": true, 00:25:45.666 "reset": true, 00:25:45.666 "nvme_admin": true, 00:25:45.666 "nvme_io": true, 00:25:45.666 "nvme_io_md": false, 00:25:45.666 "write_zeroes": true, 00:25:45.666 "zcopy": false, 00:25:45.666 "get_zone_info": false, 00:25:45.666 "zone_management": false, 00:25:45.666 "zone_append": false, 00:25:45.666 "compare": true, 00:25:45.666 "compare_and_write": false, 00:25:45.666 "abort": true, 00:25:45.666 "seek_hole": false, 00:25:45.666 "seek_data": false, 00:25:45.666 "copy": true, 00:25:45.666 "nvme_iov_md": false 00:25:45.666 }, 00:25:45.666 "driver_specific": { 00:25:45.666 "nvme": [ 00:25:45.666 { 00:25:45.666 "pci_address": "0000:00:11.0", 00:25:45.666 "trid": { 00:25:45.666 "trtype": "PCIe", 00:25:45.666 "traddr": "0000:00:11.0" 00:25:45.666 }, 00:25:45.666 "ctrlr_data": { 00:25:45.666 "cntlid": 0, 00:25:45.666 "vendor_id": "0x1b36", 00:25:45.666 "model_number": "QEMU NVMe Ctrl", 00:25:45.666 "serial_number": "12341", 00:25:45.666 "firmware_revision": "8.0.0", 00:25:45.666 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:45.666 "oacs": { 00:25:45.666 "security": 0, 00:25:45.666 "format": 1, 00:25:45.666 "firmware": 0, 00:25:45.666 "ns_manage": 1 00:25:45.666 }, 00:25:45.666 "multi_ctrlr": false, 00:25:45.666 "ana_reporting": false 00:25:45.666 }, 00:25:45.666 "vs": { 00:25:45.666 "nvme_version": "1.4" 00:25:45.666 }, 00:25:45.666 "ns_data": { 00:25:45.666 "id": 1, 00:25:45.666 "can_share": false 00:25:45.666 } 00:25:45.666 } 00:25:45.666 ], 00:25:45.666 "mp_policy": "active_passive" 00:25:45.666 } 00:25:45.666 } 00:25:45.666 ]' 00:25:45.666 18:23:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:45.666 18:23:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:45.666 18:23:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:45.666 18:23:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:45.666 18:23:56 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:45.666 18:23:56 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:25:45.666 18:23:56 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:25:45.666 18:23:56 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:45.666 18:23:56 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:25:45.666 18:23:56 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:45.666 18:23:56 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:45.925 18:23:56 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=1468977f-4ea3-470d-b9c6-705b1fa7502d 00:25:45.925 18:23:56 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:25:45.925 18:23:56 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1468977f-4ea3-470d-b9c6-705b1fa7502d 00:25:46.185 18:23:56 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:25:46.444 18:23:56 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=7809b67a-7baf-4fcc-82bd-336c5bf2c14d 00:25:46.444 18:23:56 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7809b67a-7baf-4fcc-82bd-336c5bf2c14d 00:25:46.702 18:23:57 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=911deffb-2d68-429f-936c-eecd9ccf4bc2 00:25:46.702 18:23:57 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:25:46.702 18:23:57 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 911deffb-2d68-429f-936c-eecd9ccf4bc2 00:25:46.702 18:23:57 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:25:46.702 18:23:57 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:46.702 18:23:57 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=911deffb-2d68-429f-936c-eecd9ccf4bc2 00:25:46.702 18:23:57 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:25:46.702 18:23:57 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 911deffb-2d68-429f-936c-eecd9ccf4bc2 00:25:46.702 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=911deffb-2d68-429f-936c-eecd9ccf4bc2 00:25:46.702 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:46.702 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:46.702 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:46.702 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 911deffb-2d68-429f-936c-eecd9ccf4bc2 00:25:46.702 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:46.702 { 00:25:46.702 "name": "911deffb-2d68-429f-936c-eecd9ccf4bc2", 00:25:46.702 "aliases": [ 00:25:46.702 "lvs/nvme0n1p0" 00:25:46.702 ], 00:25:46.702 "product_name": "Logical Volume", 00:25:46.702 "block_size": 4096, 00:25:46.702 "num_blocks": 26476544, 00:25:46.702 "uuid": "911deffb-2d68-429f-936c-eecd9ccf4bc2", 00:25:46.702 "assigned_rate_limits": { 00:25:46.702 "rw_ios_per_sec": 0, 00:25:46.702 "rw_mbytes_per_sec": 0, 00:25:46.702 "r_mbytes_per_sec": 0, 00:25:46.702 "w_mbytes_per_sec": 0 00:25:46.702 }, 00:25:46.702 "claimed": false, 00:25:46.702 "zoned": false, 00:25:46.702 "supported_io_types": { 00:25:46.702 "read": true, 00:25:46.702 "write": true, 00:25:46.702 "unmap": true, 00:25:46.702 "flush": false, 00:25:46.702 "reset": true, 00:25:46.702 "nvme_admin": false, 00:25:46.702 "nvme_io": false, 00:25:46.702 "nvme_io_md": false, 00:25:46.702 "write_zeroes": true, 00:25:46.702 "zcopy": false, 00:25:46.702 "get_zone_info": false, 00:25:46.702 "zone_management": false, 00:25:46.702 "zone_append": false, 00:25:46.702 "compare": false, 00:25:46.702 "compare_and_write": false, 00:25:46.702 "abort": false, 00:25:46.702 "seek_hole": true, 00:25:46.702 "seek_data": true, 00:25:46.702 "copy": false, 00:25:46.702 "nvme_iov_md": false 00:25:46.702 }, 00:25:46.702 "driver_specific": { 00:25:46.702 "lvol": { 00:25:46.702 "lvol_store_uuid": "7809b67a-7baf-4fcc-82bd-336c5bf2c14d", 00:25:46.702 "base_bdev": "nvme0n1", 00:25:46.702 "thin_provision": true, 00:25:46.702 "num_allocated_clusters": 0, 00:25:46.702 "snapshot": false, 00:25:46.702 "clone": false, 00:25:46.703 "esnap_clone": false 00:25:46.703 } 00:25:46.703 } 00:25:46.703 } 00:25:46.703 ]' 00:25:46.703 18:23:57 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:46.960 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:46.960 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:46.960 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:46.960 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:46.960 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:46.960 18:23:57 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:25:46.960 18:23:57 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:25:46.961 18:23:57 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:47.219 18:23:57 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:47.219 18:23:57 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:47.219 18:23:57 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 911deffb-2d68-429f-936c-eecd9ccf4bc2 00:25:47.219 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=911deffb-2d68-429f-936c-eecd9ccf4bc2 00:25:47.219 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:47.219 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:47.219 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:47.219 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 911deffb-2d68-429f-936c-eecd9ccf4bc2 00:25:47.219 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:47.219 { 00:25:47.219 "name": "911deffb-2d68-429f-936c-eecd9ccf4bc2", 00:25:47.219 "aliases": [ 00:25:47.219 "lvs/nvme0n1p0" 00:25:47.219 ], 00:25:47.219 "product_name": "Logical Volume", 00:25:47.219 "block_size": 4096, 00:25:47.219 "num_blocks": 26476544, 00:25:47.219 "uuid": "911deffb-2d68-429f-936c-eecd9ccf4bc2", 00:25:47.219 "assigned_rate_limits": { 00:25:47.219 "rw_ios_per_sec": 0, 00:25:47.219 "rw_mbytes_per_sec": 0, 00:25:47.219 "r_mbytes_per_sec": 0, 00:25:47.219 "w_mbytes_per_sec": 0 00:25:47.219 }, 00:25:47.219 "claimed": false, 00:25:47.219 "zoned": false, 00:25:47.219 "supported_io_types": { 00:25:47.219 "read": true, 00:25:47.219 "write": true, 00:25:47.219 "unmap": true, 00:25:47.219 "flush": false, 00:25:47.219 "reset": true, 00:25:47.219 "nvme_admin": false, 00:25:47.219 "nvme_io": false, 00:25:47.219 "nvme_io_md": false, 00:25:47.219 "write_zeroes": true, 00:25:47.219 "zcopy": false, 00:25:47.219 "get_zone_info": false, 00:25:47.219 "zone_management": false, 00:25:47.219 "zone_append": false, 00:25:47.219 "compare": false, 00:25:47.219 "compare_and_write": false, 00:25:47.219 "abort": false, 00:25:47.219 "seek_hole": true, 00:25:47.219 "seek_data": true, 00:25:47.219 "copy": false, 00:25:47.219 "nvme_iov_md": false 00:25:47.219 }, 00:25:47.219 "driver_specific": { 00:25:47.219 "lvol": { 00:25:47.219 "lvol_store_uuid": "7809b67a-7baf-4fcc-82bd-336c5bf2c14d", 00:25:47.219 "base_bdev": "nvme0n1", 00:25:47.219 "thin_provision": true, 00:25:47.219 "num_allocated_clusters": 0, 00:25:47.219 "snapshot": false, 00:25:47.219 "clone": false, 00:25:47.219 "esnap_clone": false 00:25:47.219 } 00:25:47.219 } 00:25:47.219 } 00:25:47.219 ]' 00:25:47.219 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
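get_bdev_size above turns two fields of the bdev_get_bdevs JSON into a size in MiB: block_size times num_blocks, divided down by 1024 twice. Rerunning that arithmetic by hand against the lvol just dumped, from the SPDK repo root, reproduces the helper's result (the rpc.py call and jq filters are the ones shown in this log; the computation is a sketch of what the helper does, not its source).

    # 4096 B/block * 26476544 blocks = 103424 MiB for the lvol, versus
    # 1310720 blocks = 5120 MiB for the raw nvme0n1 namespace earlier.
    info=$(scripts/rpc.py bdev_get_bdevs -b 911deffb-2d68-429f-936c-eecd9ccf4bc2)
    bs=$(jq '.[] .block_size' <<< "$info")    # 4096
    nb=$(jq '.[] .num_blocks' <<< "$info")    # 26476544
    echo $(( bs * nb / 1024 / 1024 ))         # 103424 (MiB)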
00:25:47.477 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:47.477 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:47.477 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:47.477 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:47.477 18:23:57 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:47.477 18:23:57 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:25:47.477 18:23:57 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:47.734 18:23:58 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:25:47.734 18:23:58 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 911deffb-2d68-429f-936c-eecd9ccf4bc2 00:25:47.734 18:23:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=911deffb-2d68-429f-936c-eecd9ccf4bc2 00:25:47.734 18:23:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:47.734 18:23:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:47.734 18:23:58 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:47.734 18:23:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 911deffb-2d68-429f-936c-eecd9ccf4bc2 00:25:47.734 18:23:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:47.734 { 00:25:47.734 "name": "911deffb-2d68-429f-936c-eecd9ccf4bc2", 00:25:47.734 "aliases": [ 00:25:47.734 "lvs/nvme0n1p0" 00:25:47.734 ], 00:25:47.734 "product_name": "Logical Volume", 00:25:47.734 "block_size": 4096, 00:25:47.734 "num_blocks": 26476544, 00:25:47.734 "uuid": "911deffb-2d68-429f-936c-eecd9ccf4bc2", 00:25:47.734 "assigned_rate_limits": { 00:25:47.734 "rw_ios_per_sec": 0, 00:25:47.734 "rw_mbytes_per_sec": 0, 00:25:47.734 "r_mbytes_per_sec": 0, 00:25:47.734 "w_mbytes_per_sec": 0 00:25:47.734 }, 00:25:47.734 "claimed": false, 00:25:47.734 "zoned": false, 00:25:47.734 "supported_io_types": { 00:25:47.734 "read": true, 00:25:47.734 "write": true, 00:25:47.734 "unmap": true, 00:25:47.734 "flush": false, 00:25:47.734 "reset": true, 00:25:47.734 "nvme_admin": false, 00:25:47.734 "nvme_io": false, 00:25:47.734 "nvme_io_md": false, 00:25:47.734 "write_zeroes": true, 00:25:47.734 "zcopy": false, 00:25:47.734 "get_zone_info": false, 00:25:47.734 "zone_management": false, 00:25:47.734 "zone_append": false, 00:25:47.734 "compare": false, 00:25:47.734 "compare_and_write": false, 00:25:47.734 "abort": false, 00:25:47.734 "seek_hole": true, 00:25:47.734 "seek_data": true, 00:25:47.734 "copy": false, 00:25:47.734 "nvme_iov_md": false 00:25:47.734 }, 00:25:47.734 "driver_specific": { 00:25:47.734 "lvol": { 00:25:47.734 "lvol_store_uuid": "7809b67a-7baf-4fcc-82bd-336c5bf2c14d", 00:25:47.734 "base_bdev": "nvme0n1", 00:25:47.734 "thin_provision": true, 00:25:47.734 "num_allocated_clusters": 0, 00:25:47.734 "snapshot": false, 00:25:47.734 "clone": false, 00:25:47.734 "esnap_clone": false 00:25:47.734 } 00:25:47.734 } 00:25:47.734 } 00:25:47.734 ]' 00:25:48.104 18:23:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:48.104 18:23:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:48.104 18:23:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:48.104 18:23:58 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:25:48.104 18:23:58 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:48.104 18:23:58 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:48.104 18:23:58 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:25:48.104 18:23:58 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 911deffb-2d68-429f-936c-eecd9ccf4bc2 --l2p_dram_limit 10' 00:25:48.105 18:23:58 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:25:48.105 18:23:58 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:25:48.105 18:23:58 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:48.105 18:23:58 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:25:48.105 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:25:48.105 18:23:58 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 911deffb-2d68-429f-936c-eecd9ccf4bc2 --l2p_dram_limit 10 -c nvc0n1p0 00:25:48.105 [2024-12-06 18:23:58.578571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.105 [2024-12-06 18:23:58.578631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:48.105 [2024-12-06 18:23:58.578651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:48.105 [2024-12-06 18:23:58.578662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.105 [2024-12-06 18:23:58.578731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.105 [2024-12-06 18:23:58.578744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:48.105 [2024-12-06 18:23:58.578757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:25:48.105 [2024-12-06 18:23:58.578767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.105 [2024-12-06 18:23:58.578798] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:48.105 [2024-12-06 18:23:58.579827] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:48.105 [2024-12-06 18:23:58.579869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.105 [2024-12-06 18:23:58.579881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:48.105 [2024-12-06 18:23:58.579896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.080 ms 00:25:48.105 [2024-12-06 18:23:58.579906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.105 [2024-12-06 18:23:58.579987] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 246e93d4-98de-4402-8d08-9ff86994df11 00:25:48.105 [2024-12-06 18:23:58.581386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.105 [2024-12-06 18:23:58.581544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:48.105 [2024-12-06 18:23:58.581565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:48.105 [2024-12-06 18:23:58.581578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.105 [2024-12-06 18:23:58.588962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.105 [2024-12-06 
18:23:58.589119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:48.105 [2024-12-06 18:23:58.589141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.332 ms 00:25:48.105 [2024-12-06 18:23:58.589154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.105 [2024-12-06 18:23:58.589261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.105 [2024-12-06 18:23:58.589313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:48.105 [2024-12-06 18:23:58.589325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:25:48.105 [2024-12-06 18:23:58.589342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.105 [2024-12-06 18:23:58.589406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.105 [2024-12-06 18:23:58.589422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:48.105 [2024-12-06 18:23:58.589435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:48.105 [2024-12-06 18:23:58.589448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.105 [2024-12-06 18:23:58.589474] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:48.105 [2024-12-06 18:23:58.595005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.105 [2024-12-06 18:23:58.595133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:48.105 [2024-12-06 18:23:58.595284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.544 ms 00:25:48.105 [2024-12-06 18:23:58.595327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.105 [2024-12-06 18:23:58.595392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.105 [2024-12-06 18:23:58.595432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:48.105 [2024-12-06 18:23:58.595525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:48.105 [2024-12-06 18:23:58.595562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.105 [2024-12-06 18:23:58.595637] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:48.105 [2024-12-06 18:23:58.595799] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:48.105 [2024-12-06 18:23:58.595896] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:48.105 [2024-12-06 18:23:58.596009] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:48.105 [2024-12-06 18:23:58.596075] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:48.105 [2024-12-06 18:23:58.596130] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:48.105 [2024-12-06 18:23:58.596331] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:48.105 [2024-12-06 18:23:58.596366] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:48.105 [2024-12-06 18:23:58.596403] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:48.105 [2024-12-06 18:23:58.596433] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:48.105 [2024-12-06 18:23:58.596467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.105 [2024-12-06 18:23:58.596508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:48.105 [2024-12-06 18:23:58.596602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:25:48.105 [2024-12-06 18:23:58.596644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.105 [2024-12-06 18:23:58.596751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.105 [2024-12-06 18:23:58.596835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:48.105 [2024-12-06 18:23:58.596876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:25:48.105 [2024-12-06 18:23:58.596907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.105 [2024-12-06 18:23:58.597070] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:48.105 [2024-12-06 18:23:58.597174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:48.105 [2024-12-06 18:23:58.597217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:48.105 [2024-12-06 18:23:58.597291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.105 [2024-12-06 18:23:58.597333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:48.105 [2024-12-06 18:23:58.597428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:48.105 [2024-12-06 18:23:58.597501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:48.105 [2024-12-06 18:23:58.597532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:48.105 [2024-12-06 18:23:58.597565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:48.105 [2024-12-06 18:23:58.597594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:48.105 [2024-12-06 18:23:58.597628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:48.105 [2024-12-06 18:23:58.597657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:48.105 [2024-12-06 18:23:58.597741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:48.105 [2024-12-06 18:23:58.597775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:48.105 [2024-12-06 18:23:58.597808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:48.105 [2024-12-06 18:23:58.597837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.105 [2024-12-06 18:23:58.597871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:48.105 [2024-12-06 18:23:58.597900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:48.105 [2024-12-06 18:23:58.597983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.105 [2024-12-06 18:23:58.598020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:48.105 [2024-12-06 18:23:58.598052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:48.105 [2024-12-06 18:23:58.598083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:48.105 [2024-12-06 18:23:58.598115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:48.105 
[2024-12-06 18:23:58.598144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:48.105 [2024-12-06 18:23:58.598240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:48.105 [2024-12-06 18:23:58.598281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:48.105 [2024-12-06 18:23:58.598317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:48.105 [2024-12-06 18:23:58.598346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:48.105 [2024-12-06 18:23:58.598434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:48.105 [2024-12-06 18:23:58.598471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:48.105 [2024-12-06 18:23:58.598504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:48.105 [2024-12-06 18:23:58.598533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:48.105 [2024-12-06 18:23:58.598637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:48.105 [2024-12-06 18:23:58.598703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:48.105 [2024-12-06 18:23:58.598734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:48.105 [2024-12-06 18:23:58.598763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:48.105 [2024-12-06 18:23:58.598797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:48.105 [2024-12-06 18:23:58.598826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:48.105 [2024-12-06 18:23:58.598858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:48.105 [2024-12-06 18:23:58.598887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.105 [2024-12-06 18:23:58.599049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:48.105 [2024-12-06 18:23:58.599085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:48.105 [2024-12-06 18:23:58.599117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.105 [2024-12-06 18:23:58.599146] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:48.105 [2024-12-06 18:23:58.599179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:48.105 [2024-12-06 18:23:58.599257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:48.105 [2024-12-06 18:23:58.599310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.105 [2024-12-06 18:23:58.599341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:48.105 [2024-12-06 18:23:58.599376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:48.105 [2024-12-06 18:23:58.599407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:48.105 [2024-12-06 18:23:58.599484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:48.105 [2024-12-06 18:23:58.599518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:48.105 [2024-12-06 18:23:58.599588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:48.105 [2024-12-06 18:23:58.599716] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:48.105 [2024-12-06 
18:23:58.599784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:48.105 [2024-12-06 18:23:58.599838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:48.105 [2024-12-06 18:23:58.599854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:48.105 [2024-12-06 18:23:58.599864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:48.105 [2024-12-06 18:23:58.599878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:48.105 [2024-12-06 18:23:58.599888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:48.105 [2024-12-06 18:23:58.599901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:48.105 [2024-12-06 18:23:58.599911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:48.105 [2024-12-06 18:23:58.599926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:48.105 [2024-12-06 18:23:58.599936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:48.105 [2024-12-06 18:23:58.599952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:48.105 [2024-12-06 18:23:58.599962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:48.105 [2024-12-06 18:23:58.599974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:48.106 [2024-12-06 18:23:58.599984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:48.106 [2024-12-06 18:23:58.599997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:48.106 [2024-12-06 18:23:58.600008] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:48.106 [2024-12-06 18:23:58.600022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:48.106 [2024-12-06 18:23:58.600033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:48.106 [2024-12-06 18:23:58.600046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:48.106 [2024-12-06 18:23:58.600056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:48.106 [2024-12-06 18:23:58.600069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:48.106 [2024-12-06 18:23:58.600081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.106 [2024-12-06 18:23:58.600094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:48.106 [2024-12-06 18:23:58.600105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.076 ms 00:25:48.106 [2024-12-06 18:23:58.600118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.106 [2024-12-06 18:23:58.600192] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:48.106 [2024-12-06 18:23:58.600212] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:51.389 [2024-12-06 18:24:01.959864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.389 [2024-12-06 18:24:01.959933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:51.389 [2024-12-06 18:24:01.959950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3365.125 ms 00:25:51.389 [2024-12-06 18:24:01.959964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-12-06 18:24:02.001218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-12-06 18:24:02.001293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:51.648 [2024-12-06 18:24:02.001311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.026 ms 00:25:51.648 [2024-12-06 18:24:02.001325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-12-06 18:24:02.001474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-12-06 18:24:02.001491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:51.648 [2024-12-06 18:24:02.001502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:51.648 [2024-12-06 18:24:02.001522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-12-06 18:24:02.052034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-12-06 18:24:02.052101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:51.648 [2024-12-06 18:24:02.052118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.548 ms 00:25:51.648 [2024-12-06 18:24:02.052131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-12-06 18:24:02.052187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-12-06 18:24:02.052207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:51.648 [2024-12-06 18:24:02.052218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:51.648 [2024-12-06 18:24:02.052242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-12-06 18:24:02.052775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-12-06 18:24:02.052795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:51.648 [2024-12-06 18:24:02.052806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:25:51.648 [2024-12-06 18:24:02.052819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 
[2024-12-06 18:24:02.052923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-12-06 18:24:02.052937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:51.648 [2024-12-06 18:24:02.052951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:25:51.648 [2024-12-06 18:24:02.052967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-12-06 18:24:02.074844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-12-06 18:24:02.074905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:51.648 [2024-12-06 18:24:02.074920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.891 ms 00:25:51.648 [2024-12-06 18:24:02.074933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-12-06 18:24:02.101986] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:51.648 [2024-12-06 18:24:02.105192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-12-06 18:24:02.105233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:51.648 [2024-12-06 18:24:02.105251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.188 ms 00:25:51.648 [2024-12-06 18:24:02.105273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-12-06 18:24:02.193386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-12-06 18:24:02.193615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:51.648 [2024-12-06 18:24:02.193647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.206 ms 00:25:51.648 [2024-12-06 18:24:02.193659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.648 [2024-12-06 18:24:02.193879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.648 [2024-12-06 18:24:02.193896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:51.648 [2024-12-06 18:24:02.193913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:25:51.648 [2024-12-06 18:24:02.193923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.907 [2024-12-06 18:24:02.231341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.907 [2024-12-06 18:24:02.231383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:51.907 [2024-12-06 18:24:02.231402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.420 ms 00:25:51.907 [2024-12-06 18:24:02.231413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.908 [2024-12-06 18:24:02.268417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.908 [2024-12-06 18:24:02.268612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:51.908 [2024-12-06 18:24:02.268643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.012 ms 00:25:51.908 [2024-12-06 18:24:02.268654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.908 [2024-12-06 18:24:02.269431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.908 [2024-12-06 18:24:02.269453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:51.908 
[2024-12-06 18:24:02.269468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:25:51.908 [2024-12-06 18:24:02.269481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.908 [2024-12-06 18:24:02.369276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.908 [2024-12-06 18:24:02.369342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:51.908 [2024-12-06 18:24:02.369367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.887 ms 00:25:51.908 [2024-12-06 18:24:02.369378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.908 [2024-12-06 18:24:02.407302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.908 [2024-12-06 18:24:02.407485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:51.908 [2024-12-06 18:24:02.407513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.895 ms 00:25:51.908 [2024-12-06 18:24:02.407524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.908 [2024-12-06 18:24:02.443815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.908 [2024-12-06 18:24:02.443865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:51.908 [2024-12-06 18:24:02.443883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.260 ms 00:25:51.908 [2024-12-06 18:24:02.443894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.167 [2024-12-06 18:24:02.483704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.167 [2024-12-06 18:24:02.483769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:52.167 [2024-12-06 18:24:02.483789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.821 ms 00:25:52.167 [2024-12-06 18:24:02.483800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.167 [2024-12-06 18:24:02.483877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.167 [2024-12-06 18:24:02.483890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:52.167 [2024-12-06 18:24:02.483908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:52.167 [2024-12-06 18:24:02.483919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.167 [2024-12-06 18:24:02.484043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.167 [2024-12-06 18:24:02.484060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:52.167 [2024-12-06 18:24:02.484073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:52.167 [2024-12-06 18:24:02.484083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.167 [2024-12-06 18:24:02.485166] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3912.482 ms, result 0 00:25:52.167 { 00:25:52.167 "name": "ftl0", 00:25:52.167 "uuid": "246e93d4-98de-4402-8d08-9ff86994df11" 00:25:52.167 } 00:25:52.167 18:24:02 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:25:52.167 18:24:02 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:52.167 18:24:02 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:25:52.167 18:24:02 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:52.427 [2024-12-06 18:24:02.923748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.427 [2024-12-06 18:24:02.923815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:52.427 [2024-12-06 18:24:02.923831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:52.427 [2024-12-06 18:24:02.923845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.427 [2024-12-06 18:24:02.923872] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:52.427 [2024-12-06 18:24:02.928135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.427 [2024-12-06 18:24:02.928303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:52.428 [2024-12-06 18:24:02.928331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.247 ms 00:25:52.428 [2024-12-06 18:24:02.928342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.428 [2024-12-06 18:24:02.928601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.428 [2024-12-06 18:24:02.928618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:52.428 [2024-12-06 18:24:02.928631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:25:52.428 [2024-12-06 18:24:02.928642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.428 [2024-12-06 18:24:02.931156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.428 [2024-12-06 18:24:02.931180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:52.428 [2024-12-06 18:24:02.931195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.499 ms 00:25:52.428 [2024-12-06 18:24:02.931206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.428 [2024-12-06 18:24:02.936286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.428 [2024-12-06 18:24:02.936324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:52.428 [2024-12-06 18:24:02.936341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.063 ms 00:25:52.428 [2024-12-06 18:24:02.936352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.428 [2024-12-06 18:24:02.975702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.428 [2024-12-06 18:24:02.975759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:52.428 [2024-12-06 18:24:02.975778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.330 ms 00:25:52.428 [2024-12-06 18:24:02.975789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.428 [2024-12-06 18:24:02.998166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.428 [2024-12-06 18:24:02.998211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:52.428 [2024-12-06 18:24:02.998228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.353 ms 00:25:52.428 [2024-12-06 18:24:02.998239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.428 [2024-12-06 18:24:02.998454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.428 [2024-12-06 18:24:02.998470] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:52.428 [2024-12-06 18:24:02.998483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:25:52.428 [2024-12-06 18:24:02.998494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.688 [2024-12-06 18:24:03.035531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.688 [2024-12-06 18:24:03.035571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:52.688 [2024-12-06 18:24:03.035587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.069 ms 00:25:52.688 [2024-12-06 18:24:03.035597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.688 [2024-12-06 18:24:03.072428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.688 [2024-12-06 18:24:03.072584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:52.688 [2024-12-06 18:24:03.072611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.845 ms 00:25:52.688 [2024-12-06 18:24:03.072622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.688 [2024-12-06 18:24:03.108669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.688 [2024-12-06 18:24:03.108712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:52.688 [2024-12-06 18:24:03.108729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.057 ms 00:25:52.688 [2024-12-06 18:24:03.108740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.688 [2024-12-06 18:24:03.144216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.688 [2024-12-06 18:24:03.144277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:52.688 [2024-12-06 18:24:03.144296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.427 ms 00:25:52.688 [2024-12-06 18:24:03.144306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.688 [2024-12-06 18:24:03.144354] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:52.688 [2024-12-06 18:24:03.144371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:52.688 [2024-12-06 18:24:03.144391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:52.688 [2024-12-06 18:24:03.144402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:52.688 [2024-12-06 18:24:03.144416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:52.688 [2024-12-06 18:24:03.144427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144492] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 
[2024-12-06 18:24:03.144797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.144991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:25:52.689 [2024-12-06 18:24:03.145115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:52.689 [2024-12-06 18:24:03.145528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:52.690 [2024-12-06 18:24:03.145542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:52.690 [2024-12-06 18:24:03.145554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:52.690 [2024-12-06 18:24:03.145567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:52.690 [2024-12-06 18:24:03.145578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:52.690 [2024-12-06 18:24:03.145593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:52.690 [2024-12-06 18:24:03.145603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:52.690 [2024-12-06 18:24:03.145617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:52.690 [2024-12-06 18:24:03.145634] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:52.690 [2024-12-06 18:24:03.145647] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 246e93d4-98de-4402-8d08-9ff86994df11 00:25:52.690 [2024-12-06 18:24:03.145658] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:52.690 [2024-12-06 18:24:03.145673] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:52.690 [2024-12-06 18:24:03.145686] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:52.690 [2024-12-06 18:24:03.145699] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:52.690 [2024-12-06 18:24:03.145709] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:52.690 [2024-12-06 18:24:03.145721] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:52.690 [2024-12-06 18:24:03.145731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:52.690 [2024-12-06 18:24:03.145743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:52.690 [2024-12-06 18:24:03.145752] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:25:52.690 [2024-12-06 18:24:03.145764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.690 [2024-12-06 18:24:03.145774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:52.690 [2024-12-06 18:24:03.145789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:25:52.690 [2024-12-06 18:24:03.145801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.690 [2024-12-06 18:24:03.165806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.690 [2024-12-06 18:24:03.165849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:52.690 [2024-12-06 18:24:03.165865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.977 ms 00:25:52.690 [2024-12-06 18:24:03.165876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.690 [2024-12-06 18:24:03.166436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.690 [2024-12-06 18:24:03.166450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:52.690 [2024-12-06 18:24:03.166466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:25:52.690 [2024-12-06 18:24:03.166477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.690 [2024-12-06 18:24:03.231547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.690 [2024-12-06 18:24:03.231612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:52.690 [2024-12-06 18:24:03.231632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.690 [2024-12-06 18:24:03.231643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.690 [2024-12-06 18:24:03.231723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.690 [2024-12-06 18:24:03.231735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:52.690 [2024-12-06 18:24:03.231751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.690 [2024-12-06 18:24:03.231762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.690 [2024-12-06 18:24:03.231893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.690 [2024-12-06 18:24:03.231908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:52.690 [2024-12-06 18:24:03.231921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.690 [2024-12-06 18:24:03.231932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.690 [2024-12-06 18:24:03.231957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.690 [2024-12-06 18:24:03.231968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:52.690 [2024-12-06 18:24:03.231981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.690 [2024-12-06 18:24:03.231994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.949 [2024-12-06 18:24:03.356836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.949 [2024-12-06 18:24:03.357078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:52.949 [2024-12-06 18:24:03.357109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:25:52.949 [2024-12-06 18:24:03.357120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.949 [2024-12-06 18:24:03.458470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.949 [2024-12-06 18:24:03.458535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:52.949 [2024-12-06 18:24:03.458554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.949 [2024-12-06 18:24:03.458568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.949 [2024-12-06 18:24:03.458685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.949 [2024-12-06 18:24:03.458697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:52.949 [2024-12-06 18:24:03.458711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.949 [2024-12-06 18:24:03.458721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.949 [2024-12-06 18:24:03.458784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.949 [2024-12-06 18:24:03.458796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:52.949 [2024-12-06 18:24:03.458809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.949 [2024-12-06 18:24:03.458819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.949 [2024-12-06 18:24:03.458943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.949 [2024-12-06 18:24:03.458957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:52.949 [2024-12-06 18:24:03.458970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.949 [2024-12-06 18:24:03.458980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.949 [2024-12-06 18:24:03.459020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.949 [2024-12-06 18:24:03.459033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:52.949 [2024-12-06 18:24:03.459045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.949 [2024-12-06 18:24:03.459056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.949 [2024-12-06 18:24:03.459099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.949 [2024-12-06 18:24:03.459110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:52.949 [2024-12-06 18:24:03.459122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.949 [2024-12-06 18:24:03.459132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.950 [2024-12-06 18:24:03.459180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.950 [2024-12-06 18:24:03.459192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:52.950 [2024-12-06 18:24:03.459205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.950 [2024-12-06 18:24:03.459215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.950 [2024-12-06 18:24:03.459386] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 536.467 ms, result 0 00:25:52.950 true 00:25:52.950 18:24:03 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79007 
00:25:52.950 18:24:03 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79007 ']' 00:25:52.950 18:24:03 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79007 00:25:52.950 18:24:03 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:25:52.950 18:24:03 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.950 18:24:03 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79007 00:25:53.209 18:24:03 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:53.209 18:24:03 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:53.209 killing process with pid 79007 00:25:53.209 18:24:03 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79007' 00:25:53.209 18:24:03 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79007 00:25:53.209 18:24:03 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79007 00:25:57.404 18:24:07 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:01.625 262144+0 records in 00:26:01.625 262144+0 records out 00:26:01.625 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.12384 s, 260 MB/s 00:26:01.625 18:24:11 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:03.004 18:24:13 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:03.004 [2024-12-06 18:24:13.296230] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:26:03.004 [2024-12-06 18:24:13.296389] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79247 ] 00:26:03.004 [2024-12-06 18:24:13.482421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.263 [2024-12-06 18:24:13.597306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.522 [2024-12-06 18:24:13.970744] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:03.522 [2024-12-06 18:24:13.970818] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:03.782 [2024-12-06 18:24:14.133055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.782 [2024-12-06 18:24:14.133295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:03.782 [2024-12-06 18:24:14.133319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:03.782 [2024-12-06 18:24:14.133330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.782 [2024-12-06 18:24:14.133417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.782 [2024-12-06 18:24:14.133436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:03.782 [2024-12-06 18:24:14.133448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:26:03.782 [2024-12-06 18:24:14.133458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.782 [2024-12-06 18:24:14.133483] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:26:03.782 [2024-12-06 18:24:14.134421] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:03.782 [2024-12-06 18:24:14.134443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.782 [2024-12-06 18:24:14.134454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:03.782 [2024-12-06 18:24:14.134465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:26:03.782 [2024-12-06 18:24:14.134475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.782 [2024-12-06 18:24:14.135925] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:03.782 [2024-12-06 18:24:14.155209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.782 [2024-12-06 18:24:14.155254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:03.782 [2024-12-06 18:24:14.155282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.315 ms 00:26:03.782 [2024-12-06 18:24:14.155292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.782 [2024-12-06 18:24:14.155396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.782 [2024-12-06 18:24:14.155413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:03.782 [2024-12-06 18:24:14.155425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:03.782 [2024-12-06 18:24:14.155435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.782 [2024-12-06 18:24:14.162348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.782 [2024-12-06 18:24:14.162383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:03.782 [2024-12-06 18:24:14.162403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.843 ms 00:26:03.782 [2024-12-06 18:24:14.162421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.782 [2024-12-06 18:24:14.162528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.782 [2024-12-06 18:24:14.162542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:03.782 [2024-12-06 18:24:14.162553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:26:03.782 [2024-12-06 18:24:14.162562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.782 [2024-12-06 18:24:14.162605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.782 [2024-12-06 18:24:14.162617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:03.782 [2024-12-06 18:24:14.162627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:03.782 [2024-12-06 18:24:14.162637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.782 [2024-12-06 18:24:14.162670] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:03.782 [2024-12-06 18:24:14.167479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.782 [2024-12-06 18:24:14.167514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:03.782 [2024-12-06 18:24:14.167532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.823 ms 00:26:03.782 [2024-12-06 18:24:14.167542] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.782 [2024-12-06 18:24:14.167579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.782 [2024-12-06 18:24:14.167591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:03.782 [2024-12-06 18:24:14.167601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:03.782 [2024-12-06 18:24:14.167611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.782 [2024-12-06 18:24:14.167664] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:03.782 [2024-12-06 18:24:14.167705] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:03.782 [2024-12-06 18:24:14.167744] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:03.782 [2024-12-06 18:24:14.167768] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:03.782 [2024-12-06 18:24:14.167864] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:03.782 [2024-12-06 18:24:14.167881] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:03.782 [2024-12-06 18:24:14.167894] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:03.782 [2024-12-06 18:24:14.167907] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:03.782 [2024-12-06 18:24:14.167919] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:03.782 [2024-12-06 18:24:14.167930] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:03.782 [2024-12-06 18:24:14.167940] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:03.782 [2024-12-06 18:24:14.167957] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:03.782 [2024-12-06 18:24:14.167967] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:03.782 [2024-12-06 18:24:14.167977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.782 [2024-12-06 18:24:14.167987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:03.782 [2024-12-06 18:24:14.167997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:26:03.782 [2024-12-06 18:24:14.168007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.782 [2024-12-06 18:24:14.168079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.782 [2024-12-06 18:24:14.168091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:03.782 [2024-12-06 18:24:14.168101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:03.782 [2024-12-06 18:24:14.168110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.782 [2024-12-06 18:24:14.168213] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:03.782 [2024-12-06 18:24:14.168227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:03.782 [2024-12-06 18:24:14.168238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:26:03.782 [2024-12-06 18:24:14.168248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.782 [2024-12-06 18:24:14.168258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:03.782 [2024-12-06 18:24:14.168290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:03.782 [2024-12-06 18:24:14.168300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:03.782 [2024-12-06 18:24:14.168310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:03.782 [2024-12-06 18:24:14.168320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:03.782 [2024-12-06 18:24:14.168329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:03.782 [2024-12-06 18:24:14.168339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:03.782 [2024-12-06 18:24:14.168350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:03.782 [2024-12-06 18:24:14.168359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:03.782 [2024-12-06 18:24:14.168396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:03.782 [2024-12-06 18:24:14.168406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:03.782 [2024-12-06 18:24:14.168416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.782 [2024-12-06 18:24:14.168425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:03.782 [2024-12-06 18:24:14.168434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:03.782 [2024-12-06 18:24:14.168443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.782 [2024-12-06 18:24:14.168452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:03.782 [2024-12-06 18:24:14.168462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:03.782 [2024-12-06 18:24:14.168471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.782 [2024-12-06 18:24:14.168480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:03.782 [2024-12-06 18:24:14.168490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:03.782 [2024-12-06 18:24:14.168499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.782 [2024-12-06 18:24:14.168508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:03.782 [2024-12-06 18:24:14.168517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:03.782 [2024-12-06 18:24:14.168526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.782 [2024-12-06 18:24:14.168535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:03.782 [2024-12-06 18:24:14.168545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:03.782 [2024-12-06 18:24:14.168557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.782 [2024-12-06 18:24:14.168566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:03.782 [2024-12-06 18:24:14.168575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:03.782 [2024-12-06 18:24:14.168584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:03.782 [2024-12-06 18:24:14.168592] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:26:03.782 [2024-12-06 18:24:14.168601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:03.783 [2024-12-06 18:24:14.168610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:03.783 [2024-12-06 18:24:14.168619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:03.783 [2024-12-06 18:24:14.168629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:03.783 [2024-12-06 18:24:14.168637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.783 [2024-12-06 18:24:14.168646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:03.783 [2024-12-06 18:24:14.168655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:03.783 [2024-12-06 18:24:14.168664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.783 [2024-12-06 18:24:14.168674] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:03.783 [2024-12-06 18:24:14.168684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:03.783 [2024-12-06 18:24:14.168693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:03.783 [2024-12-06 18:24:14.168702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.783 [2024-12-06 18:24:14.168712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:03.783 [2024-12-06 18:24:14.168721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:03.783 [2024-12-06 18:24:14.168730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:03.783 [2024-12-06 18:24:14.168739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:03.783 [2024-12-06 18:24:14.168747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:03.783 [2024-12-06 18:24:14.168756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:03.783 [2024-12-06 18:24:14.168767] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:03.783 [2024-12-06 18:24:14.168779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:03.783 [2024-12-06 18:24:14.168797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:03.783 [2024-12-06 18:24:14.168807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:03.783 [2024-12-06 18:24:14.168817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:03.783 [2024-12-06 18:24:14.168827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:03.783 [2024-12-06 18:24:14.168838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:03.783 [2024-12-06 18:24:14.168848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:03.783 [2024-12-06 18:24:14.168858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:03.783 [2024-12-06 18:24:14.168868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:03.783 [2024-12-06 18:24:14.168878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:03.783 [2024-12-06 18:24:14.168888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:03.783 [2024-12-06 18:24:14.168898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:03.783 [2024-12-06 18:24:14.168909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:03.783 [2024-12-06 18:24:14.168919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:03.783 [2024-12-06 18:24:14.168929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:03.783 [2024-12-06 18:24:14.168940] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:03.783 [2024-12-06 18:24:14.168952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:03.783 [2024-12-06 18:24:14.168962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:03.783 [2024-12-06 18:24:14.168972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:03.783 [2024-12-06 18:24:14.168982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:03.783 [2024-12-06 18:24:14.168994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:03.783 [2024-12-06 18:24:14.169005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.783 [2024-12-06 18:24:14.169015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:03.783 [2024-12-06 18:24:14.169026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:26:03.783 [2024-12-06 18:24:14.169035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.783 [2024-12-06 18:24:14.211217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.783 [2024-12-06 18:24:14.211288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:03.783 [2024-12-06 18:24:14.211305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.193 ms 00:26:03.783 [2024-12-06 18:24:14.211321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.783 [2024-12-06 18:24:14.211427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.783 [2024-12-06 18:24:14.211438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:03.783 [2024-12-06 18:24:14.211449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.055 ms 00:26:03.783 [2024-12-06 18:24:14.211459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.783 [2024-12-06 18:24:14.266377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.783 [2024-12-06 18:24:14.266615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:03.783 [2024-12-06 18:24:14.266640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.928 ms 00:26:03.783 [2024-12-06 18:24:14.266652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.783 [2024-12-06 18:24:14.266710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.783 [2024-12-06 18:24:14.266722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:03.783 [2024-12-06 18:24:14.266738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:03.783 [2024-12-06 18:24:14.266748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.783 [2024-12-06 18:24:14.267241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.783 [2024-12-06 18:24:14.267255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:03.783 [2024-12-06 18:24:14.267289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:26:03.783 [2024-12-06 18:24:14.267300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.783 [2024-12-06 18:24:14.267421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.783 [2024-12-06 18:24:14.267434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:03.783 [2024-12-06 18:24:14.267451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:26:03.783 [2024-12-06 18:24:14.267461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.783 [2024-12-06 18:24:14.287864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.783 [2024-12-06 18:24:14.287913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:03.783 [2024-12-06 18:24:14.287928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.415 ms 00:26:03.783 [2024-12-06 18:24:14.287938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.783 [2024-12-06 18:24:14.308199] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:03.783 [2024-12-06 18:24:14.308257] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:03.783 [2024-12-06 18:24:14.308286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.783 [2024-12-06 18:24:14.308297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:03.783 [2024-12-06 18:24:14.308309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.248 ms 00:26:03.783 [2024-12-06 18:24:14.308329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.783 [2024-12-06 18:24:14.338668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.783 [2024-12-06 18:24:14.338731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:03.783 [2024-12-06 18:24:14.338748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.333 ms 00:26:03.783 [2024-12-06 18:24:14.338759] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.043 [2024-12-06 18:24:14.358503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.043 [2024-12-06 18:24:14.358552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:04.043 [2024-12-06 18:24:14.358568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.639 ms 00:26:04.043 [2024-12-06 18:24:14.358578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.043 [2024-12-06 18:24:14.376850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.043 [2024-12-06 18:24:14.376889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:04.043 [2024-12-06 18:24:14.376903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.247 ms 00:26:04.043 [2024-12-06 18:24:14.376914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.043 [2024-12-06 18:24:14.377729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.043 [2024-12-06 18:24:14.377759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:04.043 [2024-12-06 18:24:14.377772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:26:04.043 [2024-12-06 18:24:14.377785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.043 [2024-12-06 18:24:14.462610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.043 [2024-12-06 18:24:14.462669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:04.043 [2024-12-06 18:24:14.462685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.938 ms 00:26:04.043 [2024-12-06 18:24:14.462707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.043 [2024-12-06 18:24:14.473935] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:04.043 [2024-12-06 18:24:14.477040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.043 [2024-12-06 18:24:14.477070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:04.043 [2024-12-06 18:24:14.477085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.296 ms 00:26:04.043 [2024-12-06 18:24:14.477095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.043 [2024-12-06 18:24:14.477215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.043 [2024-12-06 18:24:14.477234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:04.043 [2024-12-06 18:24:14.477245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:04.043 [2024-12-06 18:24:14.477255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.043 [2024-12-06 18:24:14.477389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.043 [2024-12-06 18:24:14.477406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:04.043 [2024-12-06 18:24:14.477417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:04.043 [2024-12-06 18:24:14.477426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.043 [2024-12-06 18:24:14.477451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.043 [2024-12-06 18:24:14.477462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:26:04.043 [2024-12-06 18:24:14.477473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:04.043 [2024-12-06 18:24:14.477483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.043 [2024-12-06 18:24:14.477522] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:04.043 [2024-12-06 18:24:14.477540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.043 [2024-12-06 18:24:14.477550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:04.043 [2024-12-06 18:24:14.477560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:04.043 [2024-12-06 18:24:14.477570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.043 [2024-12-06 18:24:14.515076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.043 [2024-12-06 18:24:14.515119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:04.043 [2024-12-06 18:24:14.515134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.547 ms 00:26:04.043 [2024-12-06 18:24:14.515155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.043 [2024-12-06 18:24:14.515240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.043 [2024-12-06 18:24:14.515256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:04.043 [2024-12-06 18:24:14.515283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:26:04.043 [2024-12-06 18:24:14.515293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.043 [2024-12-06 18:24:14.516525] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.573 ms, result 0 00:26:04.993  [2024-12-06T18:24:16.943Z] Copying: 27/1024 [MB] (27 MBps) [2024-12-06T18:24:17.906Z] Copying: 54/1024 [MB] (26 MBps) [2024-12-06T18:24:18.839Z] Copying: 82/1024 [MB] (27 MBps) [2024-12-06T18:24:19.773Z] Copying: 109/1024 [MB] (27 MBps) [2024-12-06T18:24:20.708Z] Copying: 135/1024 [MB] (26 MBps) [2024-12-06T18:24:21.644Z] Copying: 162/1024 [MB] (27 MBps) [2024-12-06T18:24:22.580Z] Copying: 190/1024 [MB] (28 MBps) [2024-12-06T18:24:23.517Z] Copying: 218/1024 [MB] (27 MBps) [2024-12-06T18:24:24.896Z] Copying: 246/1024 [MB] (28 MBps) [2024-12-06T18:24:25.556Z] Copying: 274/1024 [MB] (28 MBps) [2024-12-06T18:24:26.932Z] Copying: 302/1024 [MB] (27 MBps) [2024-12-06T18:24:27.869Z] Copying: 328/1024 [MB] (26 MBps) [2024-12-06T18:24:28.803Z] Copying: 354/1024 [MB] (26 MBps) [2024-12-06T18:24:29.741Z] Copying: 381/1024 [MB] (27 MBps) [2024-12-06T18:24:30.684Z] Copying: 409/1024 [MB] (27 MBps) [2024-12-06T18:24:31.637Z] Copying: 436/1024 [MB] (27 MBps) [2024-12-06T18:24:32.574Z] Copying: 461/1024 [MB] (25 MBps) [2024-12-06T18:24:33.510Z] Copying: 487/1024 [MB] (25 MBps) [2024-12-06T18:24:34.888Z] Copying: 513/1024 [MB] (25 MBps) [2024-12-06T18:24:35.823Z] Copying: 539/1024 [MB] (26 MBps) [2024-12-06T18:24:36.761Z] Copying: 564/1024 [MB] (25 MBps) [2024-12-06T18:24:37.712Z] Copying: 590/1024 [MB] (25 MBps) [2024-12-06T18:24:38.647Z] Copying: 615/1024 [MB] (25 MBps) [2024-12-06T18:24:39.580Z] Copying: 641/1024 [MB] (25 MBps) [2024-12-06T18:24:40.513Z] Copying: 666/1024 [MB] (25 MBps) [2024-12-06T18:24:41.887Z] Copying: 692/1024 [MB] (25 MBps) [2024-12-06T18:24:42.823Z] Copying: 718/1024 [MB] (25 
MBps) [2024-12-06T18:24:43.761Z] Copying: 743/1024 [MB] (25 MBps) [2024-12-06T18:24:44.698Z] Copying: 769/1024 [MB] (25 MBps) [2024-12-06T18:24:45.636Z] Copying: 795/1024 [MB] (26 MBps) [2024-12-06T18:24:46.607Z] Copying: 821/1024 [MB] (26 MBps) [2024-12-06T18:24:47.544Z] Copying: 847/1024 [MB] (25 MBps) [2024-12-06T18:24:48.480Z] Copying: 873/1024 [MB] (26 MBps) [2024-12-06T18:24:49.858Z] Copying: 900/1024 [MB] (27 MBps) [2024-12-06T18:24:50.796Z] Copying: 927/1024 [MB] (26 MBps) [2024-12-06T18:24:51.730Z] Copying: 953/1024 [MB] (26 MBps) [2024-12-06T18:24:52.665Z] Copying: 980/1024 [MB] (26 MBps) [2024-12-06T18:24:53.233Z] Copying: 1006/1024 [MB] (26 MBps) [2024-12-06T18:24:53.233Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-12-06 18:24:53.129707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.657 [2024-12-06 18:24:53.129763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:42.657 [2024-12-06 18:24:53.129779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:42.657 [2024-12-06 18:24:53.129790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.657 [2024-12-06 18:24:53.129811] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:42.657 [2024-12-06 18:24:53.134160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.657 [2024-12-06 18:24:53.134194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:42.657 [2024-12-06 18:24:53.134213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.339 ms 00:26:42.657 [2024-12-06 18:24:53.134223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.657 [2024-12-06 18:24:53.136040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.657 [2024-12-06 18:24:53.136080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:42.657 [2024-12-06 18:24:53.136092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.796 ms 00:26:42.657 [2024-12-06 18:24:53.136102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.657 [2024-12-06 18:24:53.153800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.657 [2024-12-06 18:24:53.153840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:42.657 [2024-12-06 18:24:53.153853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.709 ms 00:26:42.657 [2024-12-06 18:24:53.153863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.657 [2024-12-06 18:24:53.158917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.657 [2024-12-06 18:24:53.158951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:42.657 [2024-12-06 18:24:53.158963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.023 ms 00:26:42.657 [2024-12-06 18:24:53.158973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.657 [2024-12-06 18:24:53.195930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.657 [2024-12-06 18:24:53.195971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:42.657 [2024-12-06 18:24:53.195985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.961 ms 00:26:42.657 [2024-12-06 18:24:53.195995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:42.657 [2024-12-06 18:24:53.217560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.657 [2024-12-06 18:24:53.217601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:42.657 [2024-12-06 18:24:53.217615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.563 ms 00:26:42.657 [2024-12-06 18:24:53.217625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.657 [2024-12-06 18:24:53.217755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.657 [2024-12-06 18:24:53.217772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:42.657 [2024-12-06 18:24:53.217783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:26:42.657 [2024-12-06 18:24:53.217793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.917 [2024-12-06 18:24:53.255286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.917 [2024-12-06 18:24:53.255329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:42.917 [2024-12-06 18:24:53.255343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.537 ms 00:26:42.917 [2024-12-06 18:24:53.255353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.917 [2024-12-06 18:24:53.292140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.917 [2024-12-06 18:24:53.292185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:42.917 [2024-12-06 18:24:53.292198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.806 ms 00:26:42.917 [2024-12-06 18:24:53.292209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.917 [2024-12-06 18:24:53.328093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.917 [2024-12-06 18:24:53.328149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:42.917 [2024-12-06 18:24:53.328163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.903 ms 00:26:42.917 [2024-12-06 18:24:53.328172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.917 [2024-12-06 18:24:53.365134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.917 [2024-12-06 18:24:53.365175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:42.917 [2024-12-06 18:24:53.365188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.909 ms 00:26:42.917 [2024-12-06 18:24:53.365197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.917 [2024-12-06 18:24:53.365235] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:42.917 [2024-12-06 18:24:53.365253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 
wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:42.917 [2024-12-06 18:24:53.365845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.365855] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.365865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.365876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.365886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.365896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.365906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.365916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.365927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.365937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.365948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.365958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.365969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.365979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.365989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366113] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:42.918 [2024-12-06 18:24:53.366348] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:42.918 [2024-12-06 18:24:53.366363] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 246e93d4-98de-4402-8d08-9ff86994df11 00:26:42.918 [2024-12-06 18:24:53.366373] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:42.918 [2024-12-06 18:24:53.366382] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:42.918 [2024-12-06 18:24:53.366399] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 
0 00:26:42.918 [2024-12-06 18:24:53.366410] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:42.918 [2024-12-06 18:24:53.366419] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:42.918 [2024-12-06 18:24:53.366439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:42.918 [2024-12-06 18:24:53.366449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:42.918 [2024-12-06 18:24:53.366458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:42.918 [2024-12-06 18:24:53.366467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:42.918 [2024-12-06 18:24:53.366477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.918 [2024-12-06 18:24:53.366487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:42.918 [2024-12-06 18:24:53.366497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:26:42.918 [2024-12-06 18:24:53.366507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.918 [2024-12-06 18:24:53.386663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.918 [2024-12-06 18:24:53.386700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:42.918 [2024-12-06 18:24:53.386712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.151 ms 00:26:42.918 [2024-12-06 18:24:53.386722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.918 [2024-12-06 18:24:53.387248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.918 [2024-12-06 18:24:53.387279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:42.918 [2024-12-06 18:24:53.387291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:26:42.918 [2024-12-06 18:24:53.387307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.918 [2024-12-06 18:24:53.439677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.918 [2024-12-06 18:24:53.439726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:42.918 [2024-12-06 18:24:53.439740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.918 [2024-12-06 18:24:53.439752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.918 [2024-12-06 18:24:53.439812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.918 [2024-12-06 18:24:53.439822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:42.918 [2024-12-06 18:24:53.439833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.918 [2024-12-06 18:24:53.439848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.918 [2024-12-06 18:24:53.439935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.918 [2024-12-06 18:24:53.439949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:42.918 [2024-12-06 18:24:53.439959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.918 [2024-12-06 18:24:53.439969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.918 [2024-12-06 18:24:53.439985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.918 [2024-12-06 18:24:53.439996] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:42.918 [2024-12-06 18:24:53.440006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.918 [2024-12-06 18:24:53.440016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.178 [2024-12-06 18:24:53.567478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.178 [2024-12-06 18:24:53.567661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:43.178 [2024-12-06 18:24:53.567686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.178 [2024-12-06 18:24:53.567696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.178 [2024-12-06 18:24:53.670294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.178 [2024-12-06 18:24:53.670475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:43.178 [2024-12-06 18:24:53.670499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.178 [2024-12-06 18:24:53.670516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.178 [2024-12-06 18:24:53.670605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.178 [2024-12-06 18:24:53.670617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:43.178 [2024-12-06 18:24:53.670627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.178 [2024-12-06 18:24:53.670637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.178 [2024-12-06 18:24:53.670684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.178 [2024-12-06 18:24:53.670696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:43.178 [2024-12-06 18:24:53.670706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.178 [2024-12-06 18:24:53.670716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.178 [2024-12-06 18:24:53.670850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.178 [2024-12-06 18:24:53.670864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:43.178 [2024-12-06 18:24:53.670874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.178 [2024-12-06 18:24:53.670884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.178 [2024-12-06 18:24:53.670920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.178 [2024-12-06 18:24:53.670933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:43.178 [2024-12-06 18:24:53.670942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.178 [2024-12-06 18:24:53.670952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.178 [2024-12-06 18:24:53.670988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.178 [2024-12-06 18:24:53.671003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:43.178 [2024-12-06 18:24:53.671013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.178 [2024-12-06 18:24:53.671022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.178 [2024-12-06 18:24:53.671063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:26:43.178 [2024-12-06 18:24:53.671074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:43.178 [2024-12-06 18:24:53.671085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.178 [2024-12-06 18:24:53.671095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.178 [2024-12-06 18:24:53.671205] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 542.345 ms, result 0 00:26:44.601 00:26:44.601 00:26:44.601 18:24:54 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:26:44.601 [2024-12-06 18:24:54.922234] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:26:44.601 [2024-12-06 18:24:54.922364] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79667 ] 00:26:44.601 [2024-12-06 18:24:55.102386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.859 [2024-12-06 18:24:55.217623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.117 [2024-12-06 18:24:55.588362] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:45.117 [2024-12-06 18:24:55.588434] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:45.376 [2024-12-06 18:24:55.750261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.376 [2024-12-06 18:24:55.750339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:45.376 [2024-12-06 18:24:55.750356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:45.376 [2024-12-06 18:24:55.750366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.376 [2024-12-06 18:24:55.750426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.376 [2024-12-06 18:24:55.750442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:45.376 [2024-12-06 18:24:55.750453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:45.376 [2024-12-06 18:24:55.750463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.377 [2024-12-06 18:24:55.750484] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:45.377 [2024-12-06 18:24:55.751465] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:45.377 [2024-12-06 18:24:55.751488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.377 [2024-12-06 18:24:55.751499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:45.377 [2024-12-06 18:24:55.751510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:26:45.377 [2024-12-06 18:24:55.751520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.377 [2024-12-06 18:24:55.752898] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:45.377 [2024-12-06 18:24:55.771480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:45.377 [2024-12-06 18:24:55.771523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:45.377 [2024-12-06 18:24:55.771539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.611 ms 00:26:45.377 [2024-12-06 18:24:55.771549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.377 [2024-12-06 18:24:55.771619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.377 [2024-12-06 18:24:55.771632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:45.377 [2024-12-06 18:24:55.771643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:45.377 [2024-12-06 18:24:55.771653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.377 [2024-12-06 18:24:55.778330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.377 [2024-12-06 18:24:55.778536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:45.377 [2024-12-06 18:24:55.778558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.615 ms 00:26:45.377 [2024-12-06 18:24:55.778573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.377 [2024-12-06 18:24:55.778654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.377 [2024-12-06 18:24:55.778667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:45.377 [2024-12-06 18:24:55.778678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:45.377 [2024-12-06 18:24:55.778687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.377 [2024-12-06 18:24:55.778731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.377 [2024-12-06 18:24:55.778743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:45.377 [2024-12-06 18:24:55.778753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:45.377 [2024-12-06 18:24:55.778763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.377 [2024-12-06 18:24:55.778793] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:45.377 [2024-12-06 18:24:55.783667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.377 [2024-12-06 18:24:55.783703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:45.377 [2024-12-06 18:24:55.783719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.888 ms 00:26:45.377 [2024-12-06 18:24:55.783729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.377 [2024-12-06 18:24:55.783762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.377 [2024-12-06 18:24:55.783774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:45.377 [2024-12-06 18:24:55.783785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:45.377 [2024-12-06 18:24:55.783794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.377 [2024-12-06 18:24:55.783846] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:45.377 [2024-12-06 18:24:55.783871] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:45.377 [2024-12-06 18:24:55.783906] 
upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:45.377 [2024-12-06 18:24:55.783927] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:45.377 [2024-12-06 18:24:55.784015] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:45.377 [2024-12-06 18:24:55.784028] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:45.377 [2024-12-06 18:24:55.784041] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:45.377 [2024-12-06 18:24:55.784054] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:45.377 [2024-12-06 18:24:55.784066] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:45.377 [2024-12-06 18:24:55.784077] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:45.377 [2024-12-06 18:24:55.784087] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:45.377 [2024-12-06 18:24:55.784099] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:45.377 [2024-12-06 18:24:55.784109] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:45.377 [2024-12-06 18:24:55.784120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.377 [2024-12-06 18:24:55.784130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:45.377 [2024-12-06 18:24:55.784140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:26:45.377 [2024-12-06 18:24:55.784150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.377 [2024-12-06 18:24:55.784220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.377 [2024-12-06 18:24:55.784231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:45.377 [2024-12-06 18:24:55.784241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:45.377 [2024-12-06 18:24:55.784251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.377 [2024-12-06 18:24:55.784369] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:45.377 [2024-12-06 18:24:55.784385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:45.377 [2024-12-06 18:24:55.784396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:45.377 [2024-12-06 18:24:55.784406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.377 [2024-12-06 18:24:55.784417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:45.377 [2024-12-06 18:24:55.784426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:45.377 [2024-12-06 18:24:55.784435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:45.377 [2024-12-06 18:24:55.784445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:45.377 [2024-12-06 18:24:55.784455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:45.377 [2024-12-06 18:24:55.784464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:45.377 [2024-12-06 18:24:55.784473] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:45.377 [2024-12-06 18:24:55.784484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:45.377 [2024-12-06 18:24:55.784493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:45.377 [2024-12-06 18:24:55.784512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:45.377 [2024-12-06 18:24:55.784521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:45.377 [2024-12-06 18:24:55.784530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.377 [2024-12-06 18:24:55.784541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:45.377 [2024-12-06 18:24:55.784550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:45.377 [2024-12-06 18:24:55.784560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.377 [2024-12-06 18:24:55.784569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:45.377 [2024-12-06 18:24:55.784578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:45.377 [2024-12-06 18:24:55.784587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:45.377 [2024-12-06 18:24:55.784596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:45.377 [2024-12-06 18:24:55.784605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:45.377 [2024-12-06 18:24:55.784614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:45.377 [2024-12-06 18:24:55.784624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:45.377 [2024-12-06 18:24:55.784633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:45.377 [2024-12-06 18:24:55.784642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:45.377 [2024-12-06 18:24:55.784651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:45.377 [2024-12-06 18:24:55.784660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:45.377 [2024-12-06 18:24:55.784668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:45.377 [2024-12-06 18:24:55.784677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:45.377 [2024-12-06 18:24:55.784686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:45.377 [2024-12-06 18:24:55.784694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:45.377 [2024-12-06 18:24:55.784703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:45.377 [2024-12-06 18:24:55.784712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:45.377 [2024-12-06 18:24:55.784721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:45.377 [2024-12-06 18:24:55.784730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:45.377 [2024-12-06 18:24:55.784739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:45.377 [2024-12-06 18:24:55.784747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.377 [2024-12-06 18:24:55.784756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:45.377 [2024-12-06 18:24:55.784765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 
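(The layout dump being printed here is internally consistent: the setup lines above report 20971520 L2P entries with a 4-byte L2P address size, which is exactly the 80.00 MiB shown for the l2p region. A minimal shell check of that arithmetic, using only figures taken from this dump:

  # l2p region size = L2P entries * L2P address size (both values from the dump above)
  l2p_entries=20971520
  l2p_addr_size=4
  echo "$(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB"   # prints "80 MiB", matching "Region l2p ... blocks: 80.00 MiB"
)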
00:26:45.377 [2024-12-06 18:24:55.784773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.377 [2024-12-06 18:24:55.784783] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:45.377 [2024-12-06 18:24:55.784794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:45.377 [2024-12-06 18:24:55.784803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:45.377 [2024-12-06 18:24:55.784813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.377 [2024-12-06 18:24:55.784823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:45.377 [2024-12-06 18:24:55.784832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:45.377 [2024-12-06 18:24:55.784841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:45.377 [2024-12-06 18:24:55.784850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:45.377 [2024-12-06 18:24:55.784859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:45.377 [2024-12-06 18:24:55.784868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:45.377 [2024-12-06 18:24:55.784878] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:45.377 [2024-12-06 18:24:55.784889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:45.377 [2024-12-06 18:24:55.784904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:45.377 [2024-12-06 18:24:55.784914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:45.377 [2024-12-06 18:24:55.784925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:45.377 [2024-12-06 18:24:55.784934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:45.377 [2024-12-06 18:24:55.784945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:45.377 [2024-12-06 18:24:55.784955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:45.377 [2024-12-06 18:24:55.784965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:45.377 [2024-12-06 18:24:55.784975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:45.377 [2024-12-06 18:24:55.784985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:45.377 [2024-12-06 18:24:55.784995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:45.377 [2024-12-06 18:24:55.785006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:45.377 [2024-12-06 18:24:55.785016] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:45.377 [2024-12-06 18:24:55.785026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:45.377 [2024-12-06 18:24:55.785037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:45.377 [2024-12-06 18:24:55.785046] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:45.377 [2024-12-06 18:24:55.785058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:45.377 [2024-12-06 18:24:55.785068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:45.377 [2024-12-06 18:24:55.785078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:45.378 [2024-12-06 18:24:55.785088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:45.378 [2024-12-06 18:24:55.785098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:45.378 [2024-12-06 18:24:55.785109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.378 [2024-12-06 18:24:55.785120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:45.378 [2024-12-06 18:24:55.785131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:26:45.378 [2024-12-06 18:24:55.785140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.378 [2024-12-06 18:24:55.821489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.378 [2024-12-06 18:24:55.821700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:45.378 [2024-12-06 18:24:55.821724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.360 ms 00:26:45.378 [2024-12-06 18:24:55.821746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.378 [2024-12-06 18:24:55.821829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.378 [2024-12-06 18:24:55.821841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:45.378 [2024-12-06 18:24:55.821852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:45.378 [2024-12-06 18:24:55.821862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.378 [2024-12-06 18:24:55.879174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.378 [2024-12-06 18:24:55.879215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:45.378 [2024-12-06 18:24:55.879229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.335 ms 00:26:45.378 [2024-12-06 18:24:55.879241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.378 [2024-12-06 18:24:55.879295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.378 [2024-12-06 18:24:55.879308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:45.378 
[2024-12-06 18:24:55.879323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:45.378 [2024-12-06 18:24:55.879333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.378 [2024-12-06 18:24:55.879828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.378 [2024-12-06 18:24:55.879847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:45.378 [2024-12-06 18:24:55.879859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:26:45.378 [2024-12-06 18:24:55.879869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.378 [2024-12-06 18:24:55.879986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.378 [2024-12-06 18:24:55.880000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:45.378 [2024-12-06 18:24:55.880016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:26:45.378 [2024-12-06 18:24:55.880026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.378 [2024-12-06 18:24:55.897347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.378 [2024-12-06 18:24:55.897387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:45.378 [2024-12-06 18:24:55.897401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.327 ms 00:26:45.378 [2024-12-06 18:24:55.897412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.378 [2024-12-06 18:24:55.915759] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:45.378 [2024-12-06 18:24:55.915799] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:45.378 [2024-12-06 18:24:55.915815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.378 [2024-12-06 18:24:55.915826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:45.378 [2024-12-06 18:24:55.915838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.323 ms 00:26:45.378 [2024-12-06 18:24:55.915847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.378 [2024-12-06 18:24:55.945805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.378 [2024-12-06 18:24:55.945847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:45.378 [2024-12-06 18:24:55.945862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.965 ms 00:26:45.378 [2024-12-06 18:24:55.945872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.636 [2024-12-06 18:24:55.964567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.636 [2024-12-06 18:24:55.964612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:45.636 [2024-12-06 18:24:55.964625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.670 ms 00:26:45.636 [2024-12-06 18:24:55.964635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.636 [2024-12-06 18:24:55.983219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.636 [2024-12-06 18:24:55.983260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:45.636 [2024-12-06 18:24:55.983286] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 18.573 ms 00:26:45.636 [2024-12-06 18:24:55.983295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.636 [2024-12-06 18:24:55.984049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.636 [2024-12-06 18:24:55.984081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:45.636 [2024-12-06 18:24:55.984098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:26:45.636 [2024-12-06 18:24:55.984108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.636 [2024-12-06 18:24:56.070780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.636 [2024-12-06 18:24:56.070845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:45.636 [2024-12-06 18:24:56.070867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.788 ms 00:26:45.636 [2024-12-06 18:24:56.070878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.636 [2024-12-06 18:24:56.082293] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:45.636 [2024-12-06 18:24:56.085623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.636 [2024-12-06 18:24:56.085655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:45.636 [2024-12-06 18:24:56.085671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.698 ms 00:26:45.636 [2024-12-06 18:24:56.085682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.636 [2024-12-06 18:24:56.085786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.636 [2024-12-06 18:24:56.085799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:45.636 [2024-12-06 18:24:56.085815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:45.636 [2024-12-06 18:24:56.085825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.636 [2024-12-06 18:24:56.085916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.636 [2024-12-06 18:24:56.085929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:45.636 [2024-12-06 18:24:56.085939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:45.636 [2024-12-06 18:24:56.085949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.636 [2024-12-06 18:24:56.085973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.636 [2024-12-06 18:24:56.085984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:45.636 [2024-12-06 18:24:56.085994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:45.636 [2024-12-06 18:24:56.086004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.636 [2024-12-06 18:24:56.086038] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:45.636 [2024-12-06 18:24:56.086050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.636 [2024-12-06 18:24:56.086060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:45.636 [2024-12-06 18:24:56.086071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:45.636 [2024-12-06 18:24:56.086081] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:45.636 [2024-12-06 18:24:56.122662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.636 [2024-12-06 18:24:56.122705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:45.636 [2024-12-06 18:24:56.122726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.617 ms 00:26:45.636 [2024-12-06 18:24:56.122736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.636 [2024-12-06 18:24:56.122808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.636 [2024-12-06 18:24:56.122820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:45.636 [2024-12-06 18:24:56.122832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:45.636 [2024-12-06 18:24:56.122842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.636 [2024-12-06 18:24:56.123905] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 373.832 ms, result 0 00:26:47.013  [2024-12-06T18:24:58.520Z] Copying: 30/1024 [MB] (30 MBps) [2024-12-06T18:24:59.452Z] Copying: 58/1024 [MB] (28 MBps) [2024-12-06T18:25:00.385Z] Copying: 86/1024 [MB] (27 MBps) [2024-12-06T18:25:01.760Z] Copying: 113/1024 [MB] (26 MBps) [2024-12-06T18:25:02.697Z] Copying: 139/1024 [MB] (26 MBps) [2024-12-06T18:25:03.635Z] Copying: 166/1024 [MB] (26 MBps) [2024-12-06T18:25:04.572Z] Copying: 194/1024 [MB] (27 MBps) [2024-12-06T18:25:05.508Z] Copying: 222/1024 [MB] (27 MBps) [2024-12-06T18:25:06.442Z] Copying: 250/1024 [MB] (28 MBps) [2024-12-06T18:25:07.379Z] Copying: 277/1024 [MB] (26 MBps) [2024-12-06T18:25:08.768Z] Copying: 304/1024 [MB] (27 MBps) [2024-12-06T18:25:09.336Z] Copying: 332/1024 [MB] (28 MBps) [2024-12-06T18:25:10.715Z] Copying: 359/1024 [MB] (27 MBps) [2024-12-06T18:25:11.653Z] Copying: 386/1024 [MB] (26 MBps) [2024-12-06T18:25:12.588Z] Copying: 412/1024 [MB] (26 MBps) [2024-12-06T18:25:13.525Z] Copying: 439/1024 [MB] (27 MBps) [2024-12-06T18:25:14.460Z] Copying: 467/1024 [MB] (27 MBps) [2024-12-06T18:25:15.394Z] Copying: 495/1024 [MB] (28 MBps) [2024-12-06T18:25:16.384Z] Copying: 525/1024 [MB] (29 MBps) [2024-12-06T18:25:17.320Z] Copying: 554/1024 [MB] (29 MBps) [2024-12-06T18:25:18.695Z] Copying: 584/1024 [MB] (29 MBps) [2024-12-06T18:25:19.631Z] Copying: 613/1024 [MB] (29 MBps) [2024-12-06T18:25:20.569Z] Copying: 642/1024 [MB] (28 MBps) [2024-12-06T18:25:21.507Z] Copying: 671/1024 [MB] (28 MBps) [2024-12-06T18:25:22.477Z] Copying: 699/1024 [MB] (27 MBps) [2024-12-06T18:25:23.414Z] Copying: 727/1024 [MB] (28 MBps) [2024-12-06T18:25:24.374Z] Copying: 756/1024 [MB] (29 MBps) [2024-12-06T18:25:25.309Z] Copying: 785/1024 [MB] (28 MBps) [2024-12-06T18:25:26.685Z] Copying: 815/1024 [MB] (29 MBps) [2024-12-06T18:25:27.315Z] Copying: 843/1024 [MB] (28 MBps) [2024-12-06T18:25:28.693Z] Copying: 871/1024 [MB] (28 MBps) [2024-12-06T18:25:29.629Z] Copying: 900/1024 [MB] (28 MBps) [2024-12-06T18:25:30.564Z] Copying: 928/1024 [MB] (28 MBps) [2024-12-06T18:25:31.498Z] Copying: 956/1024 [MB] (28 MBps) [2024-12-06T18:25:32.435Z] Copying: 986/1024 [MB] (30 MBps) [2024-12-06T18:25:32.694Z] Copying: 1015/1024 [MB] (29 MBps) [2024-12-06T18:25:32.694Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-12-06 18:25:32.570559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.118 [2024-12-06 18:25:32.570621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinit core IO channel 00:27:22.118 [2024-12-06 18:25:32.570639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:22.118 [2024-12-06 18:25:32.570651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.118 [2024-12-06 18:25:32.570676] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:22.118 [2024-12-06 18:25:32.575374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.118 [2024-12-06 18:25:32.575428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:22.118 [2024-12-06 18:25:32.575443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.685 ms 00:27:22.118 [2024-12-06 18:25:32.575455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.118 [2024-12-06 18:25:32.575670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.118 [2024-12-06 18:25:32.575684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:22.118 [2024-12-06 18:25:32.575708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:27:22.118 [2024-12-06 18:25:32.575718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.118 [2024-12-06 18:25:32.578816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.118 [2024-12-06 18:25:32.578848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:22.118 [2024-12-06 18:25:32.578860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.087 ms 00:27:22.118 [2024-12-06 18:25:32.578876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.118 [2024-12-06 18:25:32.584580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.118 [2024-12-06 18:25:32.585030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:22.118 [2024-12-06 18:25:32.585045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.690 ms 00:27:22.118 [2024-12-06 18:25:32.585055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.118 [2024-12-06 18:25:32.622560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.118 [2024-12-06 18:25:32.622799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:22.118 [2024-12-06 18:25:32.622825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.427 ms 00:27:22.118 [2024-12-06 18:25:32.622837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.118 [2024-12-06 18:25:32.645011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.118 [2024-12-06 18:25:32.645083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:22.118 [2024-12-06 18:25:32.645101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.147 ms 00:27:22.118 [2024-12-06 18:25:32.645112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.118 [2024-12-06 18:25:32.645291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.118 [2024-12-06 18:25:32.645306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:22.118 [2024-12-06 18:25:32.645317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:27:22.118 [2024-12-06 18:25:32.645334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
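(The data volume for the read pass that just finished is easy to sanity-check: spdk_dd was started with --count=262144, and the progress trace above ends at 1024/1024 [MB]. A minimal sketch of that check, assuming the FTL bdev exposes a 4096-byte block size; the block size itself is not printed in this log:

  # Hypothetical check: blocks copied * assumed block size = volume reported by spdk_dd
  count=262144        # from the spdk_dd command line above
  block_size=4096     # assumption; not stated in the log
  echo "$(( count * block_size / 1024 / 1024 )) MB"   # prints "1024 MB", matching the progress trace
)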
00:27:22.118 [2024-12-06 18:25:32.683302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.118 [2024-12-06 18:25:32.683363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:22.118 [2024-12-06 18:25:32.683380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.009 ms 00:27:22.118 [2024-12-06 18:25:32.683390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.378 [2024-12-06 18:25:32.721337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.378 [2024-12-06 18:25:32.721392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:22.378 [2024-12-06 18:25:32.721409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.952 ms 00:27:22.378 [2024-12-06 18:25:32.721420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.378 [2024-12-06 18:25:32.759577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.378 [2024-12-06 18:25:32.759792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:22.378 [2024-12-06 18:25:32.759818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.159 ms 00:27:22.378 [2024-12-06 18:25:32.759828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.378 [2024-12-06 18:25:32.797674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.379 [2024-12-06 18:25:32.797735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:22.379 [2024-12-06 18:25:32.797751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.793 ms 00:27:22.379 [2024-12-06 18:25:32.797763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.379 [2024-12-06 18:25:32.797822] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:22.379 [2024-12-06 18:25:32.797849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.797866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.797882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.797893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.797904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.797915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.797926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.797936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.797947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.797957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.797968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.797978] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.797988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.797998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 
18:25:32.798242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:27:22.379 [2024-12-06 18:25:32.798568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:22.379 [2024-12-06 18:25:32.798801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:22.380 [2024-12-06 18:25:32.798811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:22.380 [2024-12-06 18:25:32.798821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:27:22.380 [2024-12-06 18:25:32.798831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:22.380 [2024-12-06 18:25:32.798841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:22.380 [2024-12-06 18:25:32.798852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:22.380 [2024-12-06 18:25:32.798863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:22.380 [2024-12-06 18:25:32.798873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:22.380 [2024-12-06 18:25:32.798883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:22.380 [2024-12-06 18:25:32.798893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:22.380 [2024-12-06 18:25:32.798903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:22.380 [2024-12-06 18:25:32.798913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:22.380 [2024-12-06 18:25:32.798924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:22.380 [2024-12-06 18:25:32.798934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:22.380 [2024-12-06 18:25:32.798945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:22.380 [2024-12-06 18:25:32.798955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:22.380 [2024-12-06 18:25:32.798966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:22.380 [2024-12-06 18:25:32.798983] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:22.380 [2024-12-06 18:25:32.798993] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 246e93d4-98de-4402-8d08-9ff86994df11 00:27:22.380 [2024-12-06 18:25:32.799004] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:22.380 [2024-12-06 18:25:32.799014] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:22.380 [2024-12-06 18:25:32.799024] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:22.380 [2024-12-06 18:25:32.799035] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:22.380 [2024-12-06 18:25:32.799056] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:22.380 [2024-12-06 18:25:32.799066] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:22.380 [2024-12-06 18:25:32.799076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:22.380 [2024-12-06 18:25:32.799085] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:22.380 [2024-12-06 18:25:32.799094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:22.380 [2024-12-06 18:25:32.799104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.380 [2024-12-06 18:25:32.799114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:22.380 [2024-12-06 18:25:32.799125] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:27:22.380 [2024-12-06 18:25:32.799138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.380 [2024-12-06 18:25:32.819500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.380 [2024-12-06 18:25:32.819555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:22.380 [2024-12-06 18:25:32.819570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.344 ms 00:27:22.380 [2024-12-06 18:25:32.819582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.380 [2024-12-06 18:25:32.820109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.380 [2024-12-06 18:25:32.820128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:22.380 [2024-12-06 18:25:32.820147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:27:22.380 [2024-12-06 18:25:32.820158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.380 [2024-12-06 18:25:32.871575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.380 [2024-12-06 18:25:32.871640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:22.380 [2024-12-06 18:25:32.871655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.380 [2024-12-06 18:25:32.871666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.380 [2024-12-06 18:25:32.871746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.380 [2024-12-06 18:25:32.871757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:22.380 [2024-12-06 18:25:32.871775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.380 [2024-12-06 18:25:32.871785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.380 [2024-12-06 18:25:32.871861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.380 [2024-12-06 18:25:32.871874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:22.380 [2024-12-06 18:25:32.871884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.380 [2024-12-06 18:25:32.871894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.380 [2024-12-06 18:25:32.871911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.380 [2024-12-06 18:25:32.871922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:22.380 [2024-12-06 18:25:32.871932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.380 [2024-12-06 18:25:32.871947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.639 [2024-12-06 18:25:32.997099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.639 [2024-12-06 18:25:32.997169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:22.639 [2024-12-06 18:25:32.997186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.639 [2024-12-06 18:25:32.997197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.639 [2024-12-06 18:25:33.102101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.639 [2024-12-06 18:25:33.102170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:27:22.639 [2024-12-06 18:25:33.102195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.639 [2024-12-06 18:25:33.102206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.639 [2024-12-06 18:25:33.102318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.639 [2024-12-06 18:25:33.102332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:22.639 [2024-12-06 18:25:33.102343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.639 [2024-12-06 18:25:33.102353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.639 [2024-12-06 18:25:33.102410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.639 [2024-12-06 18:25:33.102423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:22.639 [2024-12-06 18:25:33.102434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.639 [2024-12-06 18:25:33.102444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.639 [2024-12-06 18:25:33.102560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.639 [2024-12-06 18:25:33.102574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:22.639 [2024-12-06 18:25:33.102584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.639 [2024-12-06 18:25:33.102594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.639 [2024-12-06 18:25:33.102629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.639 [2024-12-06 18:25:33.102641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:22.639 [2024-12-06 18:25:33.102651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.639 [2024-12-06 18:25:33.102661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.639 [2024-12-06 18:25:33.102703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.639 [2024-12-06 18:25:33.102715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:22.639 [2024-12-06 18:25:33.102725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.639 [2024-12-06 18:25:33.102735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.639 [2024-12-06 18:25:33.102777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.639 [2024-12-06 18:25:33.102789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:22.639 [2024-12-06 18:25:33.102799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.640 [2024-12-06 18:25:33.102809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.640 [2024-12-06 18:25:33.102925] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 533.202 ms, result 0 00:27:23.574 00:27:23.574 00:27:23.832 18:25:34 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:25.732 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:25.732 18:25:35 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:27:25.732 [2024-12-06 18:25:35.985379] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:27:25.732 [2024-12-06 18:25:35.985509] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80081 ] 00:27:25.732 [2024-12-06 18:25:36.165817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.732 [2024-12-06 18:25:36.283229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.297 [2024-12-06 18:25:36.622898] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:26.297 [2024-12-06 18:25:36.622977] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:26.297 [2024-12-06 18:25:36.783322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.297 [2024-12-06 18:25:36.783386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:26.297 [2024-12-06 18:25:36.783403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:26.297 [2024-12-06 18:25:36.783414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.297 [2024-12-06 18:25:36.783470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.297 [2024-12-06 18:25:36.783485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:26.297 [2024-12-06 18:25:36.783496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:26.297 [2024-12-06 18:25:36.783506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.297 [2024-12-06 18:25:36.783528] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:26.297 [2024-12-06 18:25:36.784547] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:26.297 [2024-12-06 18:25:36.784573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.297 [2024-12-06 18:25:36.784584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:26.297 [2024-12-06 18:25:36.784596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:27:26.297 [2024-12-06 18:25:36.784606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.297 [2024-12-06 18:25:36.786012] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:26.297 [2024-12-06 18:25:36.805648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.297 [2024-12-06 18:25:36.805718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:26.297 [2024-12-06 18:25:36.805735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.665 ms 00:27:26.297 [2024-12-06 18:25:36.805746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.297 [2024-12-06 18:25:36.805849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.297 [2024-12-06 18:25:36.805862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:26.297 [2024-12-06 18:25:36.805874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:26.297 
[2024-12-06 18:25:36.805884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.297 [2024-12-06 18:25:36.813246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.297 [2024-12-06 18:25:36.813313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:26.297 [2024-12-06 18:25:36.813327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.287 ms 00:27:26.297 [2024-12-06 18:25:36.813344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.297 [2024-12-06 18:25:36.813433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.297 [2024-12-06 18:25:36.813449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:26.297 [2024-12-06 18:25:36.813460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:27:26.297 [2024-12-06 18:25:36.813471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.297 [2024-12-06 18:25:36.813525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.297 [2024-12-06 18:25:36.813538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:26.297 [2024-12-06 18:25:36.813548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:26.297 [2024-12-06 18:25:36.813558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.297 [2024-12-06 18:25:36.813589] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:26.297 [2024-12-06 18:25:36.818605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.297 [2024-12-06 18:25:36.818650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:26.297 [2024-12-06 18:25:36.818667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.030 ms 00:27:26.297 [2024-12-06 18:25:36.818677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.297 [2024-12-06 18:25:36.818717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.297 [2024-12-06 18:25:36.818728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:26.297 [2024-12-06 18:25:36.818739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:26.297 [2024-12-06 18:25:36.818749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.297 [2024-12-06 18:25:36.818814] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:26.297 [2024-12-06 18:25:36.818840] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:26.297 [2024-12-06 18:25:36.818875] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:26.297 [2024-12-06 18:25:36.818896] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:26.297 [2024-12-06 18:25:36.818985] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:26.297 [2024-12-06 18:25:36.818998] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:26.297 [2024-12-06 18:25:36.819012] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:27:26.297 [2024-12-06 18:25:36.819026] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:26.297 [2024-12-06 18:25:36.819037] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:26.297 [2024-12-06 18:25:36.819049] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:26.297 [2024-12-06 18:25:36.819058] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:26.297 [2024-12-06 18:25:36.819071] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:26.297 [2024-12-06 18:25:36.819082] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:26.297 [2024-12-06 18:25:36.819092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.297 [2024-12-06 18:25:36.819103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:26.297 [2024-12-06 18:25:36.819113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:27:26.297 [2024-12-06 18:25:36.819123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.297 [2024-12-06 18:25:36.819197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.297 [2024-12-06 18:25:36.819209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:26.297 [2024-12-06 18:25:36.819219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:26.297 [2024-12-06 18:25:36.819229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.297 [2024-12-06 18:25:36.819346] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:26.297 [2024-12-06 18:25:36.819363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:26.297 [2024-12-06 18:25:36.819393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:26.297 [2024-12-06 18:25:36.819403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:26.297 [2024-12-06 18:25:36.819414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:26.297 [2024-12-06 18:25:36.819424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:26.297 [2024-12-06 18:25:36.819433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:26.297 [2024-12-06 18:25:36.819444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:26.297 [2024-12-06 18:25:36.819454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:26.297 [2024-12-06 18:25:36.819463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:26.297 [2024-12-06 18:25:36.819476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:26.297 [2024-12-06 18:25:36.819486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:26.297 [2024-12-06 18:25:36.819495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:26.297 [2024-12-06 18:25:36.819515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:26.297 [2024-12-06 18:25:36.819524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:26.297 [2024-12-06 18:25:36.819533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:26.297 [2024-12-06 18:25:36.819543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:27:26.297 [2024-12-06 18:25:36.819552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:26.297 [2024-12-06 18:25:36.819562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:26.297 [2024-12-06 18:25:36.819571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:26.297 [2024-12-06 18:25:36.819580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:26.297 [2024-12-06 18:25:36.819589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:26.297 [2024-12-06 18:25:36.819598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:26.297 [2024-12-06 18:25:36.819607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:26.297 [2024-12-06 18:25:36.819616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:26.297 [2024-12-06 18:25:36.819625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:26.297 [2024-12-06 18:25:36.819634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:26.297 [2024-12-06 18:25:36.819644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:26.297 [2024-12-06 18:25:36.819653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:26.297 [2024-12-06 18:25:36.819662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:26.297 [2024-12-06 18:25:36.819671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:26.297 [2024-12-06 18:25:36.819680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:26.297 [2024-12-06 18:25:36.819689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:26.297 [2024-12-06 18:25:36.819698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:26.297 [2024-12-06 18:25:36.819707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:26.297 [2024-12-06 18:25:36.819716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:26.297 [2024-12-06 18:25:36.819724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:26.297 [2024-12-06 18:25:36.819733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:26.297 [2024-12-06 18:25:36.819742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:26.298 [2024-12-06 18:25:36.819751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:26.298 [2024-12-06 18:25:36.819759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:26.298 [2024-12-06 18:25:36.819768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:26.298 [2024-12-06 18:25:36.819778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:26.298 [2024-12-06 18:25:36.819786] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:26.298 [2024-12-06 18:25:36.819797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:26.298 [2024-12-06 18:25:36.819806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:26.298 [2024-12-06 18:25:36.819815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:26.298 [2024-12-06 18:25:36.819825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:26.298 [2024-12-06 18:25:36.819834] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:26.298 [2024-12-06 18:25:36.819843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:26.298 [2024-12-06 18:25:36.819852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:26.298 [2024-12-06 18:25:36.819861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:26.298 [2024-12-06 18:25:36.819871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:26.298 [2024-12-06 18:25:36.819881] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:26.298 [2024-12-06 18:25:36.819893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:26.298 [2024-12-06 18:25:36.819908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:26.298 [2024-12-06 18:25:36.819919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:26.298 [2024-12-06 18:25:36.819929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:26.298 [2024-12-06 18:25:36.819939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:26.298 [2024-12-06 18:25:36.819950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:26.298 [2024-12-06 18:25:36.819961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:26.298 [2024-12-06 18:25:36.819971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:26.298 [2024-12-06 18:25:36.819982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:26.298 [2024-12-06 18:25:36.819992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:26.298 [2024-12-06 18:25:36.820002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:26.298 [2024-12-06 18:25:36.820012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:26.298 [2024-12-06 18:25:36.820022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:26.298 [2024-12-06 18:25:36.820032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:26.298 [2024-12-06 18:25:36.820042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:26.298 [2024-12-06 18:25:36.820052] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:26.298 [2024-12-06 18:25:36.820064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:26.298 [2024-12-06 18:25:36.820075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:26.298 [2024-12-06 18:25:36.820085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:26.298 [2024-12-06 18:25:36.820096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:26.298 [2024-12-06 18:25:36.820106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:26.298 [2024-12-06 18:25:36.820117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.298 [2024-12-06 18:25:36.820127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:26.298 [2024-12-06 18:25:36.820137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:27:26.298 [2024-12-06 18:25:36.820148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.298 [2024-12-06 18:25:36.856967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.298 [2024-12-06 18:25:36.857023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:26.298 [2024-12-06 18:25:36.857039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.825 ms 00:27:26.298 [2024-12-06 18:25:36.857055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.298 [2024-12-06 18:25:36.857160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.298 [2024-12-06 18:25:36.857171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:26.298 [2024-12-06 18:25:36.857182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:27:26.298 [2024-12-06 18:25:36.857193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.556 [2024-12-06 18:25:36.921344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.556 [2024-12-06 18:25:36.921400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:26.556 [2024-12-06 18:25:36.921416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.154 ms 00:27:26.556 [2024-12-06 18:25:36.921427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.556 [2024-12-06 18:25:36.921488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.556 [2024-12-06 18:25:36.921499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:26.556 [2024-12-06 18:25:36.921514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:26.556 [2024-12-06 18:25:36.921524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.556 [2024-12-06 18:25:36.922017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.556 [2024-12-06 18:25:36.922032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:26.556 [2024-12-06 18:25:36.922043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:27:26.556 [2024-12-06 18:25:36.922053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.556 [2024-12-06 18:25:36.922172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:26.556 [2024-12-06 18:25:36.922186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:26.556 [2024-12-06 18:25:36.922203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:27:26.556 [2024-12-06 18:25:36.922213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.556 [2024-12-06 18:25:36.940341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.556 [2024-12-06 18:25:36.940582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:26.556 [2024-12-06 18:25:36.940609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.135 ms 00:27:26.556 [2024-12-06 18:25:36.940621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.556 [2024-12-06 18:25:36.960574] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:26.556 [2024-12-06 18:25:36.960625] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:26.556 [2024-12-06 18:25:36.960643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.556 [2024-12-06 18:25:36.960654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:26.556 [2024-12-06 18:25:36.960667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.901 ms 00:27:26.556 [2024-12-06 18:25:36.960676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.556 [2024-12-06 18:25:36.991030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.556 [2024-12-06 18:25:36.991105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:26.556 [2024-12-06 18:25:36.991122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.348 ms 00:27:26.556 [2024-12-06 18:25:36.991133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.556 [2024-12-06 18:25:37.010710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.556 [2024-12-06 18:25:37.010963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:26.556 [2024-12-06 18:25:37.010989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.521 ms 00:27:26.556 [2024-12-06 18:25:37.011000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.556 [2024-12-06 18:25:37.030250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.556 [2024-12-06 18:25:37.030319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:26.556 [2024-12-06 18:25:37.030336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.224 ms 00:27:26.556 [2024-12-06 18:25:37.030346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.556 [2024-12-06 18:25:37.031209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.556 [2024-12-06 18:25:37.031235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:26.556 [2024-12-06 18:25:37.031251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:27:26.556 [2024-12-06 18:25:37.031262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.556 [2024-12-06 18:25:37.117328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.556 [2024-12-06 
18:25:37.117627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:26.556 [2024-12-06 18:25:37.117660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.164 ms 00:27:26.556 [2024-12-06 18:25:37.117672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.557 [2024-12-06 18:25:37.130298] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:26.814 [2024-12-06 18:25:37.133542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.814 [2024-12-06 18:25:37.133578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:26.814 [2024-12-06 18:25:37.133593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.798 ms 00:27:26.814 [2024-12-06 18:25:37.133605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.814 [2024-12-06 18:25:37.133719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.814 [2024-12-06 18:25:37.133733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:26.814 [2024-12-06 18:25:37.133748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:26.814 [2024-12-06 18:25:37.133758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.814 [2024-12-06 18:25:37.133850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.814 [2024-12-06 18:25:37.133863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:26.814 [2024-12-06 18:25:37.133873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:26.814 [2024-12-06 18:25:37.133884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.814 [2024-12-06 18:25:37.133908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.814 [2024-12-06 18:25:37.133919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:26.814 [2024-12-06 18:25:37.133929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:26.814 [2024-12-06 18:25:37.133938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.814 [2024-12-06 18:25:37.133974] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:26.814 [2024-12-06 18:25:37.133987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.814 [2024-12-06 18:25:37.133997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:26.814 [2024-12-06 18:25:37.134006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:26.814 [2024-12-06 18:25:37.134016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.814 [2024-12-06 18:25:37.171223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.814 [2024-12-06 18:25:37.171295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:26.814 [2024-12-06 18:25:37.171322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.241 ms 00:27:26.814 [2024-12-06 18:25:37.171332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.814 [2024-12-06 18:25:37.171423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.814 [2024-12-06 18:25:37.171436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:26.814 [2024-12-06 
18:25:37.171448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:26.814 [2024-12-06 18:25:37.171458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.814 [2024-12-06 18:25:37.172679] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 389.498 ms, result 0 00:27:27.778  [2024-12-06T18:25:39.289Z] Copying: 27/1024 [MB] (27 MBps) [2024-12-06T18:25:40.225Z] Copying: 54/1024 [MB] (27 MBps) [2024-12-06T18:25:41.601Z] Copying: 81/1024 [MB] (27 MBps) [2024-12-06T18:25:42.536Z] Copying: 108/1024 [MB] (26 MBps) [2024-12-06T18:25:43.469Z] Copying: 134/1024 [MB] (26 MBps) [2024-12-06T18:25:44.419Z] Copying: 161/1024 [MB] (27 MBps) [2024-12-06T18:25:45.354Z] Copying: 188/1024 [MB] (26 MBps) [2024-12-06T18:25:46.287Z] Copying: 214/1024 [MB] (26 MBps) [2024-12-06T18:25:47.224Z] Copying: 240/1024 [MB] (25 MBps) [2024-12-06T18:25:48.600Z] Copying: 265/1024 [MB] (25 MBps) [2024-12-06T18:25:49.166Z] Copying: 292/1024 [MB] (26 MBps) [2024-12-06T18:25:50.571Z] Copying: 317/1024 [MB] (25 MBps) [2024-12-06T18:25:51.509Z] Copying: 343/1024 [MB] (25 MBps) [2024-12-06T18:25:52.446Z] Copying: 369/1024 [MB] (26 MBps) [2024-12-06T18:25:53.382Z] Copying: 395/1024 [MB] (26 MBps) [2024-12-06T18:25:54.318Z] Copying: 422/1024 [MB] (26 MBps) [2024-12-06T18:25:55.261Z] Copying: 449/1024 [MB] (27 MBps) [2024-12-06T18:25:56.195Z] Copying: 477/1024 [MB] (27 MBps) [2024-12-06T18:25:57.571Z] Copying: 505/1024 [MB] (27 MBps) [2024-12-06T18:25:58.504Z] Copying: 532/1024 [MB] (27 MBps) [2024-12-06T18:25:59.435Z] Copying: 560/1024 [MB] (27 MBps) [2024-12-06T18:26:00.368Z] Copying: 589/1024 [MB] (28 MBps) [2024-12-06T18:26:01.303Z] Copying: 618/1024 [MB] (28 MBps) [2024-12-06T18:26:02.275Z] Copying: 647/1024 [MB] (29 MBps) [2024-12-06T18:26:03.213Z] Copying: 674/1024 [MB] (26 MBps) [2024-12-06T18:26:04.148Z] Copying: 705/1024 [MB] (30 MBps) [2024-12-06T18:26:05.525Z] Copying: 734/1024 [MB] (29 MBps) [2024-12-06T18:26:06.458Z] Copying: 764/1024 [MB] (29 MBps) [2024-12-06T18:26:07.391Z] Copying: 793/1024 [MB] (28 MBps) [2024-12-06T18:26:08.324Z] Copying: 822/1024 [MB] (29 MBps) [2024-12-06T18:26:09.258Z] Copying: 852/1024 [MB] (30 MBps) [2024-12-06T18:26:10.279Z] Copying: 882/1024 [MB] (29 MBps) [2024-12-06T18:26:11.215Z] Copying: 911/1024 [MB] (29 MBps) [2024-12-06T18:26:12.151Z] Copying: 941/1024 [MB] (29 MBps) [2024-12-06T18:26:13.529Z] Copying: 969/1024 [MB] (28 MBps) [2024-12-06T18:26:14.465Z] Copying: 997/1024 [MB] (28 MBps) [2024-12-06T18:26:15.032Z] Copying: 1023/1024 [MB] (25 MBps) [2024-12-06T18:26:15.032Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-12-06 18:26:14.856092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.456 [2024-12-06 18:26:14.856153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:04.456 [2024-12-06 18:26:14.856180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:04.456 [2024-12-06 18:26:14.856192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.456 [2024-12-06 18:26:14.857983] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:04.456 [2024-12-06 18:26:14.865040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.456 [2024-12-06 18:26:14.865105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:04.456 [2024-12-06 18:26:14.865122] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.002 ms 00:28:04.456 [2024-12-06 18:26:14.865133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.456 [2024-12-06 18:26:14.876192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.456 [2024-12-06 18:26:14.876257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:04.456 [2024-12-06 18:26:14.876289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.561 ms 00:28:04.456 [2024-12-06 18:26:14.876309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.456 [2024-12-06 18:26:14.901028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.456 [2024-12-06 18:26:14.901283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:04.456 [2024-12-06 18:26:14.901313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.735 ms 00:28:04.456 [2024-12-06 18:26:14.901329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.456 [2024-12-06 18:26:14.906656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.456 [2024-12-06 18:26:14.906703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:04.456 [2024-12-06 18:26:14.906718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.221 ms 00:28:04.456 [2024-12-06 18:26:14.906740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.456 [2024-12-06 18:26:14.944910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.456 [2024-12-06 18:26:14.944969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:04.456 [2024-12-06 18:26:14.944986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.178 ms 00:28:04.456 [2024-12-06 18:26:14.944997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.456 [2024-12-06 18:26:14.966285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.456 [2024-12-06 18:26:14.966352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:04.456 [2024-12-06 18:26:14.966375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.267 ms 00:28:04.456 [2024-12-06 18:26:14.966385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.714 [2024-12-06 18:26:15.081202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.714 [2024-12-06 18:26:15.081290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:04.714 [2024-12-06 18:26:15.081309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.942 ms 00:28:04.714 [2024-12-06 18:26:15.081320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.714 [2024-12-06 18:26:15.118469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.714 [2024-12-06 18:26:15.118661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:04.714 [2024-12-06 18:26:15.118685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.188 ms 00:28:04.714 [2024-12-06 18:26:15.118696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.714 [2024-12-06 18:26:15.155121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.714 [2024-12-06 18:26:15.155168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim 
metadata 00:28:04.714 [2024-12-06 18:26:15.155184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.440 ms 00:28:04.715 [2024-12-06 18:26:15.155193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.715 [2024-12-06 18:26:15.191671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.715 [2024-12-06 18:26:15.191719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:04.715 [2024-12-06 18:26:15.191734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.494 ms 00:28:04.715 [2024-12-06 18:26:15.191744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.715 [2024-12-06 18:26:15.228998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.715 [2024-12-06 18:26:15.229053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:04.715 [2024-12-06 18:26:15.229069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.212 ms 00:28:04.715 [2024-12-06 18:26:15.229079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.715 [2024-12-06 18:26:15.229128] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:04.715 [2024-12-06 18:26:15.229147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 114944 / 261120 wr_cnt: 1 state: open 00:28:04.715 [2024-12-06 18:26:15.229161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 
[2024-12-06 18:26:15.229342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 
state: free 00:28:04.715 [2024-12-06 18:26:15.229604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 
0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.229992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:04.715 [2024-12-06 18:26:15.230222] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:04.715 [2024-12-06 18:26:15.230232] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 246e93d4-98de-4402-8d08-9ff86994df11 00:28:04.715 [2024-12-06 18:26:15.230243] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 114944 00:28:04.715 [2024-12-06 18:26:15.230253] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 115904 00:28:04.715 [2024-12-06 18:26:15.230270] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 114944 00:28:04.715 [2024-12-06 18:26:15.230282] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0084 00:28:04.715 [2024-12-06 18:26:15.230309] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:04.715 [2024-12-06 18:26:15.230320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:04.715 [2024-12-06 18:26:15.230330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:04.715 [2024-12-06 18:26:15.230339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:04.715 [2024-12-06 18:26:15.230347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:04.715 [2024-12-06 18:26:15.230357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.715 [2024-12-06 18:26:15.230374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:04.715 [2024-12-06 18:26:15.230385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms 00:28:04.715 [2024-12-06 18:26:15.230394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.715 [2024-12-06 18:26:15.250306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.715 [2024-12-06 18:26:15.250356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:04.715 [2024-12-06 18:26:15.250385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.902 ms 00:28:04.715 [2024-12-06 18:26:15.250396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.715 [2024-12-06 
18:26:15.250917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.715 [2024-12-06 18:26:15.250934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:04.715 [2024-12-06 18:26:15.250945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:28:04.715 [2024-12-06 18:26:15.250955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.042 [2024-12-06 18:26:15.301783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:05.042 [2024-12-06 18:26:15.301844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:05.042 [2024-12-06 18:26:15.301860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:05.042 [2024-12-06 18:26:15.301870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.042 [2024-12-06 18:26:15.301938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:05.042 [2024-12-06 18:26:15.301949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:05.042 [2024-12-06 18:26:15.301960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:05.042 [2024-12-06 18:26:15.301969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.042 [2024-12-06 18:26:15.302065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:05.042 [2024-12-06 18:26:15.302083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:05.042 [2024-12-06 18:26:15.302094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:05.042 [2024-12-06 18:26:15.302103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.042 [2024-12-06 18:26:15.302121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:05.042 [2024-12-06 18:26:15.302131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:05.042 [2024-12-06 18:26:15.302142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:05.042 [2024-12-06 18:26:15.302152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.042 [2024-12-06 18:26:15.428194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:05.042 [2024-12-06 18:26:15.428471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:05.042 [2024-12-06 18:26:15.428495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:05.042 [2024-12-06 18:26:15.428507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.042 [2024-12-06 18:26:15.533179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:05.042 [2024-12-06 18:26:15.533254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:05.042 [2024-12-06 18:26:15.533300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:05.042 [2024-12-06 18:26:15.533312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.042 [2024-12-06 18:26:15.533403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:05.042 [2024-12-06 18:26:15.533415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:05.042 [2024-12-06 18:26:15.533426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:05.042 [2024-12-06 18:26:15.533446] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.042 [2024-12-06 18:26:15.533500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:05.042 [2024-12-06 18:26:15.533512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:05.042 [2024-12-06 18:26:15.533522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:05.042 [2024-12-06 18:26:15.533532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.042 [2024-12-06 18:26:15.533650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:05.042 [2024-12-06 18:26:15.533664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:05.042 [2024-12-06 18:26:15.533674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:05.042 [2024-12-06 18:26:15.533688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.042 [2024-12-06 18:26:15.533723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:05.042 [2024-12-06 18:26:15.533735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:05.042 [2024-12-06 18:26:15.533745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:05.042 [2024-12-06 18:26:15.533755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.043 [2024-12-06 18:26:15.533793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:05.043 [2024-12-06 18:26:15.533804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:05.043 [2024-12-06 18:26:15.533814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:05.043 [2024-12-06 18:26:15.533823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.043 [2024-12-06 18:26:15.533868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:05.043 [2024-12-06 18:26:15.533879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:05.043 [2024-12-06 18:26:15.533889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:05.043 [2024-12-06 18:26:15.533899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.043 [2024-12-06 18:26:15.534016] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 679.843 ms, result 0 00:28:06.939 00:28:06.940 00:28:06.940 18:26:17 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:28:06.940 [2024-12-06 18:26:17.189084] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:28:06.940 [2024-12-06 18:26:17.189214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80493 ] 00:28:06.940 [2024-12-06 18:26:17.368697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.940 [2024-12-06 18:26:17.483392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.507 [2024-12-06 18:26:17.818003] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:07.507 [2024-12-06 18:26:17.818083] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:07.507 [2024-12-06 18:26:17.978742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.507 [2024-12-06 18:26:17.979012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:07.507 [2024-12-06 18:26:17.979036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:07.507 [2024-12-06 18:26:17.979047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.507 [2024-12-06 18:26:17.979110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.507 [2024-12-06 18:26:17.979125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:07.507 [2024-12-06 18:26:17.979136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:07.507 [2024-12-06 18:26:17.979147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.507 [2024-12-06 18:26:17.979169] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:07.507 [2024-12-06 18:26:17.980223] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:07.507 [2024-12-06 18:26:17.980248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.507 [2024-12-06 18:26:17.980259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:07.507 [2024-12-06 18:26:17.980280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:28:07.507 [2024-12-06 18:26:17.980290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.507 [2024-12-06 18:26:17.981709] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:07.507 [2024-12-06 18:26:18.001444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.507 [2024-12-06 18:26:18.001683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:07.507 [2024-12-06 18:26:18.001707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.766 ms 00:28:07.507 [2024-12-06 18:26:18.001718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.507 [2024-12-06 18:26:18.001840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.507 [2024-12-06 18:26:18.001854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:07.507 [2024-12-06 18:26:18.001865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:07.507 [2024-12-06 18:26:18.001875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.507 [2024-12-06 18:26:18.009180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:07.507 [2024-12-06 18:26:18.009417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:07.507 [2024-12-06 18:26:18.009441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.232 ms 00:28:07.507 [2024-12-06 18:26:18.009460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.507 [2024-12-06 18:26:18.009558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.507 [2024-12-06 18:26:18.009571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:07.507 [2024-12-06 18:26:18.009582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:07.507 [2024-12-06 18:26:18.009593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.507 [2024-12-06 18:26:18.009646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.507 [2024-12-06 18:26:18.009659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:07.507 [2024-12-06 18:26:18.009670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:07.507 [2024-12-06 18:26:18.009680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.507 [2024-12-06 18:26:18.009711] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:07.507 [2024-12-06 18:26:18.014529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.507 [2024-12-06 18:26:18.014566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:07.507 [2024-12-06 18:26:18.014582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.833 ms 00:28:07.507 [2024-12-06 18:26:18.014592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.507 [2024-12-06 18:26:18.014626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.507 [2024-12-06 18:26:18.014638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:07.507 [2024-12-06 18:26:18.014648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:07.507 [2024-12-06 18:26:18.014658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.507 [2024-12-06 18:26:18.014721] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:07.507 [2024-12-06 18:26:18.014746] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:07.507 [2024-12-06 18:26:18.014780] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:07.507 [2024-12-06 18:26:18.014801] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:07.507 [2024-12-06 18:26:18.014889] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:07.507 [2024-12-06 18:26:18.014902] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:07.507 [2024-12-06 18:26:18.014915] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:07.507 [2024-12-06 18:26:18.014928] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:07.507 [2024-12-06 18:26:18.014940] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:07.507 [2024-12-06 18:26:18.014951] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:07.507 [2024-12-06 18:26:18.014962] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:07.507 [2024-12-06 18:26:18.014975] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:07.507 [2024-12-06 18:26:18.014984] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:07.507 [2024-12-06 18:26:18.014995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.507 [2024-12-06 18:26:18.015005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:07.507 [2024-12-06 18:26:18.015016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:28:07.507 [2024-12-06 18:26:18.015026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.507 [2024-12-06 18:26:18.015100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.507 [2024-12-06 18:26:18.015111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:07.507 [2024-12-06 18:26:18.015121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:07.507 [2024-12-06 18:26:18.015131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.507 [2024-12-06 18:26:18.015228] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:07.507 [2024-12-06 18:26:18.015243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:07.507 [2024-12-06 18:26:18.015254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:07.507 [2024-12-06 18:26:18.015287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.507 [2024-12-06 18:26:18.015298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:07.507 [2024-12-06 18:26:18.015308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:07.507 [2024-12-06 18:26:18.015317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:07.508 [2024-12-06 18:26:18.015327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:07.508 [2024-12-06 18:26:18.015337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:07.508 [2024-12-06 18:26:18.015346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:07.508 [2024-12-06 18:26:18.015356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:07.508 [2024-12-06 18:26:18.015366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:07.508 [2024-12-06 18:26:18.015375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:07.508 [2024-12-06 18:26:18.015394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:07.508 [2024-12-06 18:26:18.015404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:07.508 [2024-12-06 18:26:18.015413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.508 [2024-12-06 18:26:18.015422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:07.508 [2024-12-06 18:26:18.015432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:07.508 [2024-12-06 18:26:18.015440] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.508 [2024-12-06 18:26:18.015450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:07.508 [2024-12-06 18:26:18.015459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:07.508 [2024-12-06 18:26:18.015468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:07.508 [2024-12-06 18:26:18.015477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:07.508 [2024-12-06 18:26:18.015486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:07.508 [2024-12-06 18:26:18.015494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:07.508 [2024-12-06 18:26:18.015503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:07.508 [2024-12-06 18:26:18.015512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:07.508 [2024-12-06 18:26:18.015521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:07.508 [2024-12-06 18:26:18.015529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:07.508 [2024-12-06 18:26:18.015538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:07.508 [2024-12-06 18:26:18.015547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:07.508 [2024-12-06 18:26:18.015556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:07.508 [2024-12-06 18:26:18.015565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:07.508 [2024-12-06 18:26:18.015574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:07.508 [2024-12-06 18:26:18.015583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:07.508 [2024-12-06 18:26:18.015591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:07.508 [2024-12-06 18:26:18.015600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:07.508 [2024-12-06 18:26:18.015609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:07.508 [2024-12-06 18:26:18.015618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:07.508 [2024-12-06 18:26:18.015626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.508 [2024-12-06 18:26:18.015635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:07.508 [2024-12-06 18:26:18.015644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:07.508 [2024-12-06 18:26:18.015655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.508 [2024-12-06 18:26:18.015664] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:07.508 [2024-12-06 18:26:18.015674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:07.508 [2024-12-06 18:26:18.015683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:07.508 [2024-12-06 18:26:18.015693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.508 [2024-12-06 18:26:18.015702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:07.508 [2024-12-06 18:26:18.015712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:07.508 [2024-12-06 18:26:18.015721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:07.508 
[2024-12-06 18:26:18.015730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:07.508 [2024-12-06 18:26:18.015739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:07.508 [2024-12-06 18:26:18.015749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:07.508 [2024-12-06 18:26:18.015759] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:07.508 [2024-12-06 18:26:18.015771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:07.508 [2024-12-06 18:26:18.015786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:07.508 [2024-12-06 18:26:18.015796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:07.508 [2024-12-06 18:26:18.015807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:07.508 [2024-12-06 18:26:18.015817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:07.508 [2024-12-06 18:26:18.015828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:07.508 [2024-12-06 18:26:18.015838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:07.508 [2024-12-06 18:26:18.015848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:07.508 [2024-12-06 18:26:18.015859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:07.508 [2024-12-06 18:26:18.015869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:07.508 [2024-12-06 18:26:18.015879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:07.508 [2024-12-06 18:26:18.015889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:07.508 [2024-12-06 18:26:18.015898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:07.508 [2024-12-06 18:26:18.015908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:07.508 [2024-12-06 18:26:18.015919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:07.508 [2024-12-06 18:26:18.015929] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:07.508 [2024-12-06 18:26:18.015939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:07.508 [2024-12-06 18:26:18.015950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:07.508 [2024-12-06 18:26:18.015960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:07.508 [2024-12-06 18:26:18.015970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:07.508 [2024-12-06 18:26:18.015981] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:07.508 [2024-12-06 18:26:18.015991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.508 [2024-12-06 18:26:18.016002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:07.508 [2024-12-06 18:26:18.016013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:28:07.508 [2024-12-06 18:26:18.016022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.508 [2024-12-06 18:26:18.055429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.508 [2024-12-06 18:26:18.055490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:07.508 [2024-12-06 18:26:18.055506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.418 ms 00:28:07.509 [2024-12-06 18:26:18.055521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.509 [2024-12-06 18:26:18.055621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.509 [2024-12-06 18:26:18.055632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:07.509 [2024-12-06 18:26:18.055643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:07.509 [2024-12-06 18:26:18.055652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.767 [2024-12-06 18:26:18.112432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.767 [2024-12-06 18:26:18.112487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:07.767 [2024-12-06 18:26:18.112503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.785 ms 00:28:07.767 [2024-12-06 18:26:18.112514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.767 [2024-12-06 18:26:18.112578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.767 [2024-12-06 18:26:18.112589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:07.767 [2024-12-06 18:26:18.112605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:07.767 [2024-12-06 18:26:18.112615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.767 [2024-12-06 18:26:18.113111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.767 [2024-12-06 18:26:18.113126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:07.767 [2024-12-06 18:26:18.113137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:28:07.767 [2024-12-06 18:26:18.113147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.767 [2024-12-06 18:26:18.113288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.767 [2024-12-06 18:26:18.113302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:07.767 [2024-12-06 18:26:18.113319] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:28:07.767 [2024-12-06 18:26:18.113329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.767 [2024-12-06 18:26:18.132102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.767 [2024-12-06 18:26:18.132156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:07.767 [2024-12-06 18:26:18.132172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.780 ms 00:28:07.767 [2024-12-06 18:26:18.132184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.767 [2024-12-06 18:26:18.152047] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:07.767 [2024-12-06 18:26:18.152112] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:07.767 [2024-12-06 18:26:18.152130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.767 [2024-12-06 18:26:18.152141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:07.767 [2024-12-06 18:26:18.152155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.825 ms 00:28:07.767 [2024-12-06 18:26:18.152165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.767 [2024-12-06 18:26:18.183671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.767 [2024-12-06 18:26:18.183756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:07.768 [2024-12-06 18:26:18.183773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.479 ms 00:28:07.768 [2024-12-06 18:26:18.183785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.768 [2024-12-06 18:26:18.203690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.768 [2024-12-06 18:26:18.203761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:07.768 [2024-12-06 18:26:18.203776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.856 ms 00:28:07.768 [2024-12-06 18:26:18.203787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.768 [2024-12-06 18:26:18.223404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.768 [2024-12-06 18:26:18.223474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:07.768 [2024-12-06 18:26:18.223489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.568 ms 00:28:07.768 [2024-12-06 18:26:18.223500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.768 [2024-12-06 18:26:18.224379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.768 [2024-12-06 18:26:18.224404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:07.768 [2024-12-06 18:26:18.224420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:28:07.768 [2024-12-06 18:26:18.224430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.768 [2024-12-06 18:26:18.311949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.768 [2024-12-06 18:26:18.312028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:07.768 [2024-12-06 18:26:18.312056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 87.634 ms 00:28:07.768 [2024-12-06 18:26:18.312067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.768 [2024-12-06 18:26:18.325957] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:07.768 [2024-12-06 18:26:18.329559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.768 [2024-12-06 18:26:18.329608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:07.768 [2024-12-06 18:26:18.329625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.443 ms 00:28:07.768 [2024-12-06 18:26:18.329636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.768 [2024-12-06 18:26:18.329760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.768 [2024-12-06 18:26:18.329774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:07.768 [2024-12-06 18:26:18.329790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:07.768 [2024-12-06 18:26:18.329800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.768 [2024-12-06 18:26:18.331465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.768 [2024-12-06 18:26:18.331512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:07.768 [2024-12-06 18:26:18.331525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.598 ms 00:28:07.768 [2024-12-06 18:26:18.331535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.768 [2024-12-06 18:26:18.331580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.768 [2024-12-06 18:26:18.331592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:07.768 [2024-12-06 18:26:18.331602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:07.768 [2024-12-06 18:26:18.331613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.768 [2024-12-06 18:26:18.331669] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:07.768 [2024-12-06 18:26:18.331682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.768 [2024-12-06 18:26:18.331693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:07.768 [2024-12-06 18:26:18.331703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:07.768 [2024-12-06 18:26:18.331713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.026 [2024-12-06 18:26:18.370755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.026 [2024-12-06 18:26:18.370833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:08.026 [2024-12-06 18:26:18.370861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.079 ms 00:28:08.026 [2024-12-06 18:26:18.370872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.026 [2024-12-06 18:26:18.370986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.026 [2024-12-06 18:26:18.370999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:08.026 [2024-12-06 18:26:18.371010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:08.026 [2024-12-06 18:26:18.371020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
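[editor's note] The next entry reports the overall result of the management process ('FTL startup' here; 'FTL shutdown' summaries appear elsewhere in this log). A one-liner to pull all such summaries from a saved log (file name assumed), handy for comparing run durations:

```bash
grep -oE "Management process finished, name '[^']+', duration = [0-9.]+ ms, result [0-9-]+" build.log
```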
00:28:08.026 [2024-12-06 18:26:18.372276] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 393.701 ms, result 0 00:28:09.439  [2024-12-06T18:26:20.968Z] Copying: 26/1024 [MB] (26 MBps) [2024-12-06T18:26:21.905Z] Copying: 56/1024 [MB] (29 MBps) [2024-12-06T18:26:22.842Z] Copying: 84/1024 [MB] (28 MBps) [2024-12-06T18:26:23.777Z] Copying: 112/1024 [MB] (27 MBps) [2024-12-06T18:26:24.719Z] Copying: 140/1024 [MB] (28 MBps) [2024-12-06T18:26:25.658Z] Copying: 169/1024 [MB] (28 MBps) [2024-12-06T18:26:26.595Z] Copying: 197/1024 [MB] (28 MBps) [2024-12-06T18:26:27.982Z] Copying: 225/1024 [MB] (27 MBps) [2024-12-06T18:26:28.921Z] Copying: 253/1024 [MB] (27 MBps) [2024-12-06T18:26:29.859Z] Copying: 279/1024 [MB] (26 MBps) [2024-12-06T18:26:30.796Z] Copying: 307/1024 [MB] (28 MBps) [2024-12-06T18:26:31.734Z] Copying: 336/1024 [MB] (28 MBps) [2024-12-06T18:26:32.671Z] Copying: 363/1024 [MB] (27 MBps) [2024-12-06T18:26:33.609Z] Copying: 390/1024 [MB] (27 MBps) [2024-12-06T18:26:35.011Z] Copying: 418/1024 [MB] (28 MBps) [2024-12-06T18:26:35.603Z] Copying: 445/1024 [MB] (26 MBps) [2024-12-06T18:26:36.978Z] Copying: 473/1024 [MB] (27 MBps) [2024-12-06T18:26:37.913Z] Copying: 499/1024 [MB] (26 MBps) [2024-12-06T18:26:38.847Z] Copying: 525/1024 [MB] (26 MBps) [2024-12-06T18:26:39.784Z] Copying: 552/1024 [MB] (26 MBps) [2024-12-06T18:26:40.722Z] Copying: 579/1024 [MB] (27 MBps) [2024-12-06T18:26:41.660Z] Copying: 606/1024 [MB] (27 MBps) [2024-12-06T18:26:42.670Z] Copying: 633/1024 [MB] (27 MBps) [2024-12-06T18:26:43.608Z] Copying: 661/1024 [MB] (27 MBps) [2024-12-06T18:26:44.987Z] Copying: 689/1024 [MB] (27 MBps) [2024-12-06T18:26:45.923Z] Copying: 716/1024 [MB] (26 MBps) [2024-12-06T18:26:46.861Z] Copying: 742/1024 [MB] (26 MBps) [2024-12-06T18:26:47.798Z] Copying: 770/1024 [MB] (27 MBps) [2024-12-06T18:26:48.735Z] Copying: 799/1024 [MB] (29 MBps) [2024-12-06T18:26:49.671Z] Copying: 828/1024 [MB] (28 MBps) [2024-12-06T18:26:50.633Z] Copying: 859/1024 [MB] (30 MBps) [2024-12-06T18:26:51.569Z] Copying: 889/1024 [MB] (30 MBps) [2024-12-06T18:26:52.944Z] Copying: 918/1024 [MB] (29 MBps) [2024-12-06T18:26:53.879Z] Copying: 947/1024 [MB] (28 MBps) [2024-12-06T18:26:54.811Z] Copying: 975/1024 [MB] (28 MBps) [2024-12-06T18:26:55.376Z] Copying: 1004/1024 [MB] (28 MBps) [2024-12-06T18:26:55.376Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-12-06 18:26:55.260701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.800 [2024-12-06 18:26:55.260754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:44.800 [2024-12-06 18:26:55.260778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:44.800 [2024-12-06 18:26:55.260790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.800 [2024-12-06 18:26:55.260812] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:44.800 [2024-12-06 18:26:55.265329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.800 [2024-12-06 18:26:55.265364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:44.800 [2024-12-06 18:26:55.265377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.507 ms 00:28:44.800 [2024-12-06 18:26:55.265388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.800 [2024-12-06 18:26:55.265571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:44.800 [2024-12-06 18:26:55.265583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:44.800 [2024-12-06 18:26:55.265594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:28:44.800 [2024-12-06 18:26:55.265610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.800 [2024-12-06 18:26:55.269758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.800 [2024-12-06 18:26:55.269799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:44.800 [2024-12-06 18:26:55.269812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.138 ms 00:28:44.800 [2024-12-06 18:26:55.269823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.800 [2024-12-06 18:26:55.275191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.800 [2024-12-06 18:26:55.275231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:44.800 [2024-12-06 18:26:55.275243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.341 ms 00:28:44.800 [2024-12-06 18:26:55.275259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.800 [2024-12-06 18:26:55.312612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.800 [2024-12-06 18:26:55.312653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:44.800 [2024-12-06 18:26:55.312667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.339 ms 00:28:44.800 [2024-12-06 18:26:55.312677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.800 [2024-12-06 18:26:55.334126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.800 [2024-12-06 18:26:55.334167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:44.800 [2024-12-06 18:26:55.334180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.445 ms 00:28:44.800 [2024-12-06 18:26:55.334191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.058 [2024-12-06 18:26:55.477040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.058 [2024-12-06 18:26:55.477108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:45.058 [2024-12-06 18:26:55.477125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 143.038 ms 00:28:45.058 [2024-12-06 18:26:55.477136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.058 [2024-12-06 18:26:55.516051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.058 [2024-12-06 18:26:55.516096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:45.058 [2024-12-06 18:26:55.516111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.960 ms 00:28:45.058 [2024-12-06 18:26:55.516121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.058 [2024-12-06 18:26:55.552873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.058 [2024-12-06 18:26:55.552919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:45.058 [2024-12-06 18:26:55.552934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.770 ms 00:28:45.058 [2024-12-06 18:26:55.552944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.058 [2024-12-06 
18:26:55.589726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.058 [2024-12-06 18:26:55.589767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:45.058 [2024-12-06 18:26:55.589781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.798 ms 00:28:45.058 [2024-12-06 18:26:55.589792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.058 [2024-12-06 18:26:55.625224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.058 [2024-12-06 18:26:55.625277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:45.058 [2024-12-06 18:26:55.625292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.410 ms 00:28:45.058 [2024-12-06 18:26:55.625302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.058 [2024-12-06 18:26:55.625342] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:45.058 [2024-12-06 18:26:55.625360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:28:45.058 [2024-12-06 18:26:55.625373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 
state: free 00:28:45.058 [2024-12-06 18:26:55.625559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:45.058 [2024-12-06 18:26:55.625746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 
0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.625998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626343] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:45.059 [2024-12-06 18:26:55.626441] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:45.059 [2024-12-06 18:26:55.626452] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 246e93d4-98de-4402-8d08-9ff86994df11 00:28:45.059 [2024-12-06 18:26:55.626463] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:28:45.059 [2024-12-06 18:26:55.626473] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 17088 00:28:45.059 [2024-12-06 18:26:55.626483] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 16128 00:28:45.059 [2024-12-06 18:26:55.626493] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0595 00:28:45.059 [2024-12-06 18:26:55.626508] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:45.059 [2024-12-06 18:26:55.626529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:45.059 [2024-12-06 18:26:55.626539] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:45.059 [2024-12-06 18:26:55.626549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:45.059 [2024-12-06 18:26:55.626557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:45.059 [2024-12-06 18:26:55.626567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.059 [2024-12-06 18:26:55.626578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:45.059 [2024-12-06 18:26:55.626587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.228 ms 00:28:45.059 [2024-12-06 18:26:55.626597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.317 [2024-12-06 18:26:55.646077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.317 [2024-12-06 18:26:55.646116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:45.317 [2024-12-06 18:26:55.646136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.475 ms 00:28:45.317 [2024-12-06 18:26:55.646146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.317 [2024-12-06 18:26:55.646695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.317 [2024-12-06 18:26:55.646718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:45.317 [2024-12-06 18:26:55.646729] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:28:45.317 [2024-12-06 18:26:55.646739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.317 [2024-12-06 18:26:55.697467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.317 [2024-12-06 18:26:55.697515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:45.317 [2024-12-06 18:26:55.697529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.317 [2024-12-06 18:26:55.697540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.317 [2024-12-06 18:26:55.697601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.317 [2024-12-06 18:26:55.697612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:45.317 [2024-12-06 18:26:55.697623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.317 [2024-12-06 18:26:55.697633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.317 [2024-12-06 18:26:55.697721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.317 [2024-12-06 18:26:55.697734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:45.317 [2024-12-06 18:26:55.697750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.317 [2024-12-06 18:26:55.697761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.317 [2024-12-06 18:26:55.697778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.317 [2024-12-06 18:26:55.697789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:45.317 [2024-12-06 18:26:55.697799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.317 [2024-12-06 18:26:55.697809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.317 [2024-12-06 18:26:55.820421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.317 [2024-12-06 18:26:55.820491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:45.317 [2024-12-06 18:26:55.820507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.317 [2024-12-06 18:26:55.820518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.576 [2024-12-06 18:26:55.921695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.576 [2024-12-06 18:26:55.921762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:45.576 [2024-12-06 18:26:55.921778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.576 [2024-12-06 18:26:55.921788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.576 [2024-12-06 18:26:55.921875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.576 [2024-12-06 18:26:55.921886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:45.576 [2024-12-06 18:26:55.921897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.576 [2024-12-06 18:26:55.921912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.576 [2024-12-06 18:26:55.921955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.576 [2024-12-06 18:26:55.921967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands 00:28:45.576 [2024-12-06 18:26:55.921977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.576 [2024-12-06 18:26:55.921987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.576 [2024-12-06 18:26:55.922087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.576 [2024-12-06 18:26:55.922101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:45.576 [2024-12-06 18:26:55.922111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.576 [2024-12-06 18:26:55.922122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.576 [2024-12-06 18:26:55.922160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.576 [2024-12-06 18:26:55.922173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:45.576 [2024-12-06 18:26:55.922182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.576 [2024-12-06 18:26:55.922192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.576 [2024-12-06 18:26:55.922229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.576 [2024-12-06 18:26:55.922240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:45.576 [2024-12-06 18:26:55.922250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.576 [2024-12-06 18:26:55.922260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.576 [2024-12-06 18:26:55.922332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.576 [2024-12-06 18:26:55.922344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:45.576 [2024-12-06 18:26:55.922354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.576 [2024-12-06 18:26:55.922372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.576 [2024-12-06 18:26:55.922497] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 662.837 ms, result 0 00:28:46.515 00:28:46.515 00:28:46.515 18:26:56 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:48.423 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:48.423 18:26:58 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:48.423 18:26:58 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:28:48.423 18:26:58 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:48.423 18:26:58 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:48.423 18:26:58 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:48.423 18:26:58 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79007 00:28:48.423 Process with pid 79007 is not found 00:28:48.423 Remove shared memory files 00:28:48.423 18:26:58 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79007 ']' 00:28:48.423 18:26:58 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79007 00:28:48.423 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79007) - No such process 00:28:48.423 18:26:58 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with 
pid 79007 is not found' 00:28:48.423 18:26:58 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:28:48.423 18:26:58 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:48.423 18:26:58 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:28:48.423 18:26:58 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:28:48.423 18:26:58 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:28:48.423 18:26:58 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:48.423 18:26:58 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:28:48.423 ************************************ 00:28:48.423 END TEST ftl_restore 00:28:48.423 ************************************ 00:28:48.423 00:28:48.423 real 3m4.939s 00:28:48.423 user 2m53.096s 00:28:48.423 sys 0m13.615s 00:28:48.423 18:26:58 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:48.423 18:26:58 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:28:48.423 18:26:58 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:28:48.423 18:26:58 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:48.423 18:26:58 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:48.423 18:26:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:48.423 ************************************ 00:28:48.423 START TEST ftl_dirty_shutdown 00:28:48.423 ************************************ 00:28:48.423 18:26:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:28:48.683 * Looking for test storage... 00:28:48.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:48.683 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:48.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.684 --rc genhtml_branch_coverage=1 00:28:48.684 --rc genhtml_function_coverage=1 00:28:48.684 --rc genhtml_legend=1 00:28:48.684 --rc geninfo_all_blocks=1 00:28:48.684 --rc geninfo_unexecuted_blocks=1 00:28:48.684 00:28:48.684 ' 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:48.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.684 --rc genhtml_branch_coverage=1 00:28:48.684 --rc genhtml_function_coverage=1 00:28:48.684 --rc genhtml_legend=1 00:28:48.684 --rc geninfo_all_blocks=1 00:28:48.684 --rc geninfo_unexecuted_blocks=1 00:28:48.684 00:28:48.684 ' 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:48.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.684 --rc genhtml_branch_coverage=1 00:28:48.684 --rc genhtml_function_coverage=1 00:28:48.684 --rc genhtml_legend=1 00:28:48.684 --rc geninfo_all_blocks=1 00:28:48.684 --rc geninfo_unexecuted_blocks=1 00:28:48.684 00:28:48.684 ' 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:48.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.684 --rc genhtml_branch_coverage=1 00:28:48.684 --rc genhtml_function_coverage=1 00:28:48.684 --rc genhtml_legend=1 00:28:48.684 --rc geninfo_all_blocks=1 00:28:48.684 --rc geninfo_unexecuted_blocks=1 00:28:48.684 00:28:48.684 ' 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:28:48.684 18:26:59 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80976 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80976 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80976 ']' 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:48.684 18:26:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:48.942 [2024-12-06 18:26:59.350288] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
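The bdev bring-up that the harness drives next through repeated rpc.py calls condenses to the sequence below. This is a sketch assembled from the commands visible in this run, not an extract of dirty_shutdown.sh itself; <lvs-uuid> and <lvol-uuid> stand in for the UUIDs returned by the lvstore and lvol creation RPCs, and the PCI addresses and sizes are specific to this VM.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe device -> nvme0n1
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on the base bdev
  $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>             # 103424 MiB thin-provisioned lvol
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache NVMe device -> nvc0n1
  $RPC bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB split -> nvc0n1p0
  $RPC -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0   # FTL atop lvol + cache

The 240-second -t timeout on the final call matches the harness's timeout=240 above; bdev_ftl_create can block for minutes while the NV cache region is scrubbed, as the trace below shows.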
00:28:48.942 [2024-12-06 18:26:59.350407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80976 ] 00:28:49.200 [2024-12-06 18:26:59.533728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.200 [2024-12-06 18:26:59.655102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.137 18:27:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.137 18:27:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:50.137 18:27:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:50.137 18:27:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:28:50.137 18:27:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:50.137 18:27:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:28:50.137 18:27:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:50.137 18:27:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:50.396 18:27:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:50.396 18:27:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:50.396 18:27:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:50.396 18:27:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:50.396 18:27:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:50.396 18:27:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:50.396 18:27:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:50.396 18:27:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:50.655 18:27:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:50.655 { 00:28:50.655 "name": "nvme0n1", 00:28:50.655 "aliases": [ 00:28:50.655 "4ab701be-19d4-4a0d-bffb-d15b4c455e7d" 00:28:50.656 ], 00:28:50.656 "product_name": "NVMe disk", 00:28:50.656 "block_size": 4096, 00:28:50.656 "num_blocks": 1310720, 00:28:50.656 "uuid": "4ab701be-19d4-4a0d-bffb-d15b4c455e7d", 00:28:50.656 "numa_id": -1, 00:28:50.656 "assigned_rate_limits": { 00:28:50.656 "rw_ios_per_sec": 0, 00:28:50.656 "rw_mbytes_per_sec": 0, 00:28:50.656 "r_mbytes_per_sec": 0, 00:28:50.656 "w_mbytes_per_sec": 0 00:28:50.656 }, 00:28:50.656 "claimed": true, 00:28:50.656 "claim_type": "read_many_write_one", 00:28:50.656 "zoned": false, 00:28:50.656 "supported_io_types": { 00:28:50.656 "read": true, 00:28:50.656 "write": true, 00:28:50.656 "unmap": true, 00:28:50.656 "flush": true, 00:28:50.656 "reset": true, 00:28:50.656 "nvme_admin": true, 00:28:50.656 "nvme_io": true, 00:28:50.656 "nvme_io_md": false, 00:28:50.656 "write_zeroes": true, 00:28:50.656 "zcopy": false, 00:28:50.656 "get_zone_info": false, 00:28:50.656 "zone_management": false, 00:28:50.656 "zone_append": false, 00:28:50.656 "compare": true, 00:28:50.656 "compare_and_write": false, 00:28:50.656 "abort": true, 00:28:50.656 "seek_hole": false, 00:28:50.656 "seek_data": false, 00:28:50.656 
"copy": true, 00:28:50.656 "nvme_iov_md": false 00:28:50.656 }, 00:28:50.656 "driver_specific": { 00:28:50.656 "nvme": [ 00:28:50.656 { 00:28:50.656 "pci_address": "0000:00:11.0", 00:28:50.656 "trid": { 00:28:50.656 "trtype": "PCIe", 00:28:50.656 "traddr": "0000:00:11.0" 00:28:50.656 }, 00:28:50.656 "ctrlr_data": { 00:28:50.656 "cntlid": 0, 00:28:50.656 "vendor_id": "0x1b36", 00:28:50.656 "model_number": "QEMU NVMe Ctrl", 00:28:50.656 "serial_number": "12341", 00:28:50.656 "firmware_revision": "8.0.0", 00:28:50.656 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:50.656 "oacs": { 00:28:50.656 "security": 0, 00:28:50.656 "format": 1, 00:28:50.656 "firmware": 0, 00:28:50.656 "ns_manage": 1 00:28:50.656 }, 00:28:50.656 "multi_ctrlr": false, 00:28:50.656 "ana_reporting": false 00:28:50.656 }, 00:28:50.656 "vs": { 00:28:50.656 "nvme_version": "1.4" 00:28:50.656 }, 00:28:50.656 "ns_data": { 00:28:50.656 "id": 1, 00:28:50.656 "can_share": false 00:28:50.656 } 00:28:50.656 } 00:28:50.656 ], 00:28:50.656 "mp_policy": "active_passive" 00:28:50.656 } 00:28:50.656 } 00:28:50.656 ]' 00:28:50.656 18:27:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:50.656 18:27:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:50.656 18:27:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:50.656 18:27:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:50.656 18:27:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:50.656 18:27:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:28:50.656 18:27:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:50.656 18:27:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:50.656 18:27:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:50.656 18:27:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:50.656 18:27:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:50.915 18:27:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=7809b67a-7baf-4fcc-82bd-336c5bf2c14d 00:28:50.915 18:27:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:50.915 18:27:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7809b67a-7baf-4fcc-82bd-336c5bf2c14d 00:28:51.174 18:27:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:51.433 18:27:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=15d3342c-e24b-40a1-9820-baeb65fb0e76 00:28:51.433 18:27:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 15d3342c-e24b-40a1-9820-baeb65fb0e76 00:28:51.692 18:27:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=03853a72-5ee0-4f38-8b23-b2e64a3fe5af 00:28:51.692 18:27:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:28:51.692 18:27:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 03853a72-5ee0-4f38-8b23-b2e64a3fe5af 00:28:51.692 18:27:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:28:51.692 18:27:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:28:51.692 18:27:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=03853a72-5ee0-4f38-8b23-b2e64a3fe5af 00:28:51.692 18:27:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:28:51.692 18:27:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 03853a72-5ee0-4f38-8b23-b2e64a3fe5af 00:28:51.692 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=03853a72-5ee0-4f38-8b23-b2e64a3fe5af 00:28:51.692 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:51.692 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:51.692 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:51.692 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 03853a72-5ee0-4f38-8b23-b2e64a3fe5af 00:28:51.952 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:51.952 { 00:28:51.952 "name": "03853a72-5ee0-4f38-8b23-b2e64a3fe5af", 00:28:51.952 "aliases": [ 00:28:51.952 "lvs/nvme0n1p0" 00:28:51.952 ], 00:28:51.952 "product_name": "Logical Volume", 00:28:51.952 "block_size": 4096, 00:28:51.952 "num_blocks": 26476544, 00:28:51.952 "uuid": "03853a72-5ee0-4f38-8b23-b2e64a3fe5af", 00:28:51.952 "assigned_rate_limits": { 00:28:51.952 "rw_ios_per_sec": 0, 00:28:51.952 "rw_mbytes_per_sec": 0, 00:28:51.952 "r_mbytes_per_sec": 0, 00:28:51.952 "w_mbytes_per_sec": 0 00:28:51.952 }, 00:28:51.952 "claimed": false, 00:28:51.952 "zoned": false, 00:28:51.952 "supported_io_types": { 00:28:51.952 "read": true, 00:28:51.952 "write": true, 00:28:51.952 "unmap": true, 00:28:51.952 "flush": false, 00:28:51.952 "reset": true, 00:28:51.952 "nvme_admin": false, 00:28:51.952 "nvme_io": false, 00:28:51.952 "nvme_io_md": false, 00:28:51.952 "write_zeroes": true, 00:28:51.952 "zcopy": false, 00:28:51.952 "get_zone_info": false, 00:28:51.952 "zone_management": false, 00:28:51.952 "zone_append": false, 00:28:51.952 "compare": false, 00:28:51.952 "compare_and_write": false, 00:28:51.952 "abort": false, 00:28:51.952 "seek_hole": true, 00:28:51.952 "seek_data": true, 00:28:51.952 "copy": false, 00:28:51.952 "nvme_iov_md": false 00:28:51.952 }, 00:28:51.952 "driver_specific": { 00:28:51.952 "lvol": { 00:28:51.952 "lvol_store_uuid": "15d3342c-e24b-40a1-9820-baeb65fb0e76", 00:28:51.952 "base_bdev": "nvme0n1", 00:28:51.952 "thin_provision": true, 00:28:51.952 "num_allocated_clusters": 0, 00:28:51.952 "snapshot": false, 00:28:51.952 "clone": false, 00:28:51.952 "esnap_clone": false 00:28:51.952 } 00:28:51.952 } 00:28:51.952 } 00:28:51.952 ]' 00:28:51.952 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:51.952 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:51.953 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:51.953 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:51.953 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:51.953 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:51.953 18:27:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:28:51.953 18:27:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:51.953 18:27:02 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:52.212 18:27:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:52.212 18:27:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:52.212 18:27:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 03853a72-5ee0-4f38-8b23-b2e64a3fe5af 00:28:52.212 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=03853a72-5ee0-4f38-8b23-b2e64a3fe5af 00:28:52.212 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:52.212 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:52.212 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:52.212 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 03853a72-5ee0-4f38-8b23-b2e64a3fe5af 00:28:52.471 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:52.471 { 00:28:52.471 "name": "03853a72-5ee0-4f38-8b23-b2e64a3fe5af", 00:28:52.471 "aliases": [ 00:28:52.471 "lvs/nvme0n1p0" 00:28:52.471 ], 00:28:52.471 "product_name": "Logical Volume", 00:28:52.471 "block_size": 4096, 00:28:52.471 "num_blocks": 26476544, 00:28:52.471 "uuid": "03853a72-5ee0-4f38-8b23-b2e64a3fe5af", 00:28:52.471 "assigned_rate_limits": { 00:28:52.471 "rw_ios_per_sec": 0, 00:28:52.471 "rw_mbytes_per_sec": 0, 00:28:52.471 "r_mbytes_per_sec": 0, 00:28:52.471 "w_mbytes_per_sec": 0 00:28:52.471 }, 00:28:52.471 "claimed": false, 00:28:52.471 "zoned": false, 00:28:52.471 "supported_io_types": { 00:28:52.471 "read": true, 00:28:52.471 "write": true, 00:28:52.471 "unmap": true, 00:28:52.471 "flush": false, 00:28:52.471 "reset": true, 00:28:52.471 "nvme_admin": false, 00:28:52.471 "nvme_io": false, 00:28:52.471 "nvme_io_md": false, 00:28:52.471 "write_zeroes": true, 00:28:52.471 "zcopy": false, 00:28:52.471 "get_zone_info": false, 00:28:52.471 "zone_management": false, 00:28:52.471 "zone_append": false, 00:28:52.471 "compare": false, 00:28:52.471 "compare_and_write": false, 00:28:52.471 "abort": false, 00:28:52.471 "seek_hole": true, 00:28:52.471 "seek_data": true, 00:28:52.471 "copy": false, 00:28:52.471 "nvme_iov_md": false 00:28:52.471 }, 00:28:52.471 "driver_specific": { 00:28:52.471 "lvol": { 00:28:52.471 "lvol_store_uuid": "15d3342c-e24b-40a1-9820-baeb65fb0e76", 00:28:52.471 "base_bdev": "nvme0n1", 00:28:52.472 "thin_provision": true, 00:28:52.472 "num_allocated_clusters": 0, 00:28:52.472 "snapshot": false, 00:28:52.472 "clone": false, 00:28:52.472 "esnap_clone": false 00:28:52.472 } 00:28:52.472 } 00:28:52.472 } 00:28:52.472 ]' 00:28:52.472 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:52.472 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:52.472 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:52.472 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:52.472 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:52.472 18:27:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:52.472 18:27:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:28:52.472 18:27:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:52.731 18:27:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:28:52.731 18:27:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 03853a72-5ee0-4f38-8b23-b2e64a3fe5af 00:28:52.731 18:27:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=03853a72-5ee0-4f38-8b23-b2e64a3fe5af 00:28:52.731 18:27:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:52.731 18:27:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:52.731 18:27:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:52.731 18:27:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 03853a72-5ee0-4f38-8b23-b2e64a3fe5af 00:28:52.989 18:27:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:52.989 { 00:28:52.989 "name": "03853a72-5ee0-4f38-8b23-b2e64a3fe5af", 00:28:52.990 "aliases": [ 00:28:52.990 "lvs/nvme0n1p0" 00:28:52.990 ], 00:28:52.990 "product_name": "Logical Volume", 00:28:52.990 "block_size": 4096, 00:28:52.990 "num_blocks": 26476544, 00:28:52.990 "uuid": "03853a72-5ee0-4f38-8b23-b2e64a3fe5af", 00:28:52.990 "assigned_rate_limits": { 00:28:52.990 "rw_ios_per_sec": 0, 00:28:52.990 "rw_mbytes_per_sec": 0, 00:28:52.990 "r_mbytes_per_sec": 0, 00:28:52.990 "w_mbytes_per_sec": 0 00:28:52.990 }, 00:28:52.990 "claimed": false, 00:28:52.990 "zoned": false, 00:28:52.990 "supported_io_types": { 00:28:52.990 "read": true, 00:28:52.990 "write": true, 00:28:52.990 "unmap": true, 00:28:52.990 "flush": false, 00:28:52.990 "reset": true, 00:28:52.990 "nvme_admin": false, 00:28:52.990 "nvme_io": false, 00:28:52.990 "nvme_io_md": false, 00:28:52.990 "write_zeroes": true, 00:28:52.990 "zcopy": false, 00:28:52.990 "get_zone_info": false, 00:28:52.990 "zone_management": false, 00:28:52.990 "zone_append": false, 00:28:52.990 "compare": false, 00:28:52.990 "compare_and_write": false, 00:28:52.990 "abort": false, 00:28:52.990 "seek_hole": true, 00:28:52.990 "seek_data": true, 00:28:52.990 "copy": false, 00:28:52.990 "nvme_iov_md": false 00:28:52.990 }, 00:28:52.990 "driver_specific": { 00:28:52.990 "lvol": { 00:28:52.990 "lvol_store_uuid": "15d3342c-e24b-40a1-9820-baeb65fb0e76", 00:28:52.990 "base_bdev": "nvme0n1", 00:28:52.990 "thin_provision": true, 00:28:52.990 "num_allocated_clusters": 0, 00:28:52.990 "snapshot": false, 00:28:52.990 "clone": false, 00:28:52.990 "esnap_clone": false 00:28:52.990 } 00:28:52.990 } 00:28:52.990 } 00:28:52.990 ]' 00:28:52.990 18:27:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:52.990 18:27:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:52.990 18:27:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:52.990 18:27:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:52.990 18:27:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:52.990 18:27:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:52.990 18:27:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:28:52.990 18:27:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 03853a72-5ee0-4f38-8b23-b2e64a3fe5af 
--l2p_dram_limit 10' 00:28:52.990 18:27:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:28:52.990 18:27:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:28:52.990 18:27:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:52.990 18:27:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 03853a72-5ee0-4f38-8b23-b2e64a3fe5af --l2p_dram_limit 10 -c nvc0n1p0 00:28:53.250 [2024-12-06 18:27:03.638665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.250 [2024-12-06 18:27:03.638724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:53.250 [2024-12-06 18:27:03.638744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:53.250 [2024-12-06 18:27:03.638755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.250 [2024-12-06 18:27:03.638823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.250 [2024-12-06 18:27:03.638836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:53.250 [2024-12-06 18:27:03.638849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:53.250 [2024-12-06 18:27:03.638860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.250 [2024-12-06 18:27:03.638890] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:53.250 [2024-12-06 18:27:03.639951] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:53.250 [2024-12-06 18:27:03.639986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.250 [2024-12-06 18:27:03.639998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:53.250 [2024-12-06 18:27:03.640012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.105 ms 00:28:53.250 [2024-12-06 18:27:03.640023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.250 [2024-12-06 18:27:03.640104] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0fd165a4-a7ae-4b12-8280-ae23d7c38836 00:28:53.250 [2024-12-06 18:27:03.641523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.250 [2024-12-06 18:27:03.641556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:53.250 [2024-12-06 18:27:03.641570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:53.250 [2024-12-06 18:27:03.641582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.250 [2024-12-06 18:27:03.648964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.250 [2024-12-06 18:27:03.649166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:53.250 [2024-12-06 18:27:03.649188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.341 ms 00:28:53.250 [2024-12-06 18:27:03.649201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.250 [2024-12-06 18:27:03.649323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.250 [2024-12-06 18:27:03.649340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:53.250 [2024-12-06 18:27:03.649352] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:28:53.250 [2024-12-06 18:27:03.649370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.250 [2024-12-06 18:27:03.649428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.250 [2024-12-06 18:27:03.649443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:53.250 [2024-12-06 18:27:03.649457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:53.250 [2024-12-06 18:27:03.649469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.250 [2024-12-06 18:27:03.649494] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:53.250 [2024-12-06 18:27:03.654260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.250 [2024-12-06 18:27:03.654306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:53.250 [2024-12-06 18:27:03.654323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.777 ms 00:28:53.250 [2024-12-06 18:27:03.654333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.250 [2024-12-06 18:27:03.654381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.250 [2024-12-06 18:27:03.654393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:53.250 [2024-12-06 18:27:03.654407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:53.250 [2024-12-06 18:27:03.654418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.250 [2024-12-06 18:27:03.654463] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:53.250 [2024-12-06 18:27:03.654597] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:53.250 [2024-12-06 18:27:03.654617] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:53.250 [2024-12-06 18:27:03.654630] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:53.250 [2024-12-06 18:27:03.654646] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:53.250 [2024-12-06 18:27:03.654657] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:53.250 [2024-12-06 18:27:03.654671] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:53.250 [2024-12-06 18:27:03.654681] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:53.250 [2024-12-06 18:27:03.654698] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:53.250 [2024-12-06 18:27:03.654709] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:53.250 [2024-12-06 18:27:03.654721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.250 [2024-12-06 18:27:03.654742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:53.250 [2024-12-06 18:27:03.654756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:28:53.250 [2024-12-06 18:27:03.654766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.250 [2024-12-06 18:27:03.654843] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.250 [2024-12-06 18:27:03.654858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:53.250 [2024-12-06 18:27:03.654871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:53.250 [2024-12-06 18:27:03.654880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.250 [2024-12-06 18:27:03.654977] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:53.250 [2024-12-06 18:27:03.654991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:53.250 [2024-12-06 18:27:03.655004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:53.250 [2024-12-06 18:27:03.655014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:53.250 [2024-12-06 18:27:03.655028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:53.250 [2024-12-06 18:27:03.655037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:53.250 [2024-12-06 18:27:03.655049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:53.250 [2024-12-06 18:27:03.655059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:53.250 [2024-12-06 18:27:03.655070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:53.250 [2024-12-06 18:27:03.655080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:53.250 [2024-12-06 18:27:03.655094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:53.250 [2024-12-06 18:27:03.655104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:53.250 [2024-12-06 18:27:03.655116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:53.250 [2024-12-06 18:27:03.655125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:53.250 [2024-12-06 18:27:03.655137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:53.250 [2024-12-06 18:27:03.655146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:53.250 [2024-12-06 18:27:03.655161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:53.250 [2024-12-06 18:27:03.655170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:53.250 [2024-12-06 18:27:03.655182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:53.250 [2024-12-06 18:27:03.655191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:53.250 [2024-12-06 18:27:03.655203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:53.250 [2024-12-06 18:27:03.655212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:53.250 [2024-12-06 18:27:03.655224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:53.250 [2024-12-06 18:27:03.655233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:53.251 [2024-12-06 18:27:03.655245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:53.251 [2024-12-06 18:27:03.655254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:53.251 [2024-12-06 18:27:03.655277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:53.251 [2024-12-06 18:27:03.655287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:53.251 [2024-12-06 18:27:03.655299] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:53.251 [2024-12-06 18:27:03.655309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:53.251 [2024-12-06 18:27:03.655321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:53.251 [2024-12-06 18:27:03.655330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:53.251 [2024-12-06 18:27:03.655344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:53.251 [2024-12-06 18:27:03.655354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:53.251 [2024-12-06 18:27:03.655365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:53.251 [2024-12-06 18:27:03.655375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:53.251 [2024-12-06 18:27:03.655388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:53.251 [2024-12-06 18:27:03.655398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:53.251 [2024-12-06 18:27:03.655410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:53.251 [2024-12-06 18:27:03.655419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:53.251 [2024-12-06 18:27:03.655431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:53.251 [2024-12-06 18:27:03.655440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:53.251 [2024-12-06 18:27:03.655451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:53.251 [2024-12-06 18:27:03.655460] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:53.251 [2024-12-06 18:27:03.655473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:53.251 [2024-12-06 18:27:03.655483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:53.251 [2024-12-06 18:27:03.655496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:53.251 [2024-12-06 18:27:03.655506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:53.251 [2024-12-06 18:27:03.655520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:53.251 [2024-12-06 18:27:03.655530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:53.251 [2024-12-06 18:27:03.655542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:53.251 [2024-12-06 18:27:03.655551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:53.251 [2024-12-06 18:27:03.655563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:53.251 [2024-12-06 18:27:03.655573] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:53.251 [2024-12-06 18:27:03.655591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:53.251 [2024-12-06 18:27:03.655603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:53.251 [2024-12-06 18:27:03.655616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:53.251 [2024-12-06 18:27:03.655627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:53.251 [2024-12-06 18:27:03.655640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:53.251 [2024-12-06 18:27:03.655650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:53.251 [2024-12-06 18:27:03.655663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:53.251 [2024-12-06 18:27:03.655674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:53.251 [2024-12-06 18:27:03.655689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:53.251 [2024-12-06 18:27:03.655699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:53.251 [2024-12-06 18:27:03.655714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:53.251 [2024-12-06 18:27:03.655724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:53.251 [2024-12-06 18:27:03.655737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:53.251 [2024-12-06 18:27:03.655747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:53.251 [2024-12-06 18:27:03.655760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:53.251 [2024-12-06 18:27:03.655770] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:53.251 [2024-12-06 18:27:03.655785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:53.251 [2024-12-06 18:27:03.655796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:53.251 [2024-12-06 18:27:03.655810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:53.251 [2024-12-06 18:27:03.655820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:53.251 [2024-12-06 18:27:03.655833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:53.251 [2024-12-06 18:27:03.655843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.251 [2024-12-06 18:27:03.655856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:53.251 [2024-12-06 18:27:03.655867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.926 ms 00:28:53.251 [2024-12-06 18:27:03.655879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.251 [2024-12-06 18:27:03.655921] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:53.251 [2024-12-06 18:27:03.655939] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:56.538 [2024-12-06 18:27:06.994068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.538 [2024-12-06 18:27:06.994278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:56.538 [2024-12-06 18:27:06.994306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3343.565 ms 00:28:56.538 [2024-12-06 18:27:06.994321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.538 [2024-12-06 18:27:07.030715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.538 [2024-12-06 18:27:07.030768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:56.538 [2024-12-06 18:27:07.030784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.131 ms 00:28:56.538 [2024-12-06 18:27:07.030799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.538 [2024-12-06 18:27:07.030942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.538 [2024-12-06 18:27:07.030958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:56.538 [2024-12-06 18:27:07.030970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:56.538 [2024-12-06 18:27:07.030990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.538 [2024-12-06 18:27:07.073220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.538 [2024-12-06 18:27:07.073413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:56.538 [2024-12-06 18:27:07.073439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.240 ms 00:28:56.538 [2024-12-06 18:27:07.073452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.538 [2024-12-06 18:27:07.073500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.538 [2024-12-06 18:27:07.073521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:56.538 [2024-12-06 18:27:07.073532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:56.538 [2024-12-06 18:27:07.073555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.538 [2024-12-06 18:27:07.074029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.538 [2024-12-06 18:27:07.074047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:56.538 [2024-12-06 18:27:07.074057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:28:56.538 [2024-12-06 18:27:07.074070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.538 [2024-12-06 18:27:07.074169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.538 [2024-12-06 18:27:07.074183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:56.538 [2024-12-06 18:27:07.074197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:28:56.538 [2024-12-06 18:27:07.074212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.538 [2024-12-06 18:27:07.093735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.538 [2024-12-06 18:27:07.093790] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:56.538 [2024-12-06 18:27:07.093807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.534 ms 00:28:56.538 [2024-12-06 18:27:07.093820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.797 [2024-12-06 18:27:07.116480] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:56.797 [2024-12-06 18:27:07.119976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.797 [2024-12-06 18:27:07.120149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:56.797 [2024-12-06 18:27:07.120183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.084 ms 00:28:56.797 [2024-12-06 18:27:07.120207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.797 [2024-12-06 18:27:07.207554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.797 [2024-12-06 18:27:07.207619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:56.797 [2024-12-06 18:27:07.207639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.433 ms 00:28:56.797 [2024-12-06 18:27:07.207651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.797 [2024-12-06 18:27:07.207843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.797 [2024-12-06 18:27:07.207860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:56.797 [2024-12-06 18:27:07.207877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:28:56.797 [2024-12-06 18:27:07.207888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.797 [2024-12-06 18:27:07.245866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.797 [2024-12-06 18:27:07.246025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:56.797 [2024-12-06 18:27:07.246053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.983 ms 00:28:56.797 [2024-12-06 18:27:07.246065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.797 [2024-12-06 18:27:07.282530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.797 [2024-12-06 18:27:07.282589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:56.797 [2024-12-06 18:27:07.282609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.475 ms 00:28:56.797 [2024-12-06 18:27:07.282620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.797 [2024-12-06 18:27:07.283387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.798 [2024-12-06 18:27:07.283408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:56.798 [2024-12-06 18:27:07.283423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:28:56.798 [2024-12-06 18:27:07.283435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.056 [2024-12-06 18:27:07.384467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.056 [2024-12-06 18:27:07.384739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:57.056 [2024-12-06 18:27:07.384774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.130 ms 00:28:57.056 [2024-12-06 18:27:07.384785] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.056 [2024-12-06 18:27:07.422906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.056 [2024-12-06 18:27:07.422950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:57.056 [2024-12-06 18:27:07.422969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.056 ms 00:28:57.056 [2024-12-06 18:27:07.422980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.056 [2024-12-06 18:27:07.459536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.056 [2024-12-06 18:27:07.459579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:57.056 [2024-12-06 18:27:07.459596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.569 ms 00:28:57.056 [2024-12-06 18:27:07.459606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.056 [2024-12-06 18:27:07.496638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.056 [2024-12-06 18:27:07.496681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:57.056 [2024-12-06 18:27:07.496698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.048 ms 00:28:57.056 [2024-12-06 18:27:07.496710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.056 [2024-12-06 18:27:07.496759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.056 [2024-12-06 18:27:07.496771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:57.056 [2024-12-06 18:27:07.496788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:57.057 [2024-12-06 18:27:07.496798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.057 [2024-12-06 18:27:07.496900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.057 [2024-12-06 18:27:07.496917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:57.057 [2024-12-06 18:27:07.496930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:28:57.057 [2024-12-06 18:27:07.496940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.057 [2024-12-06 18:27:07.497937] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3865.120 ms, result 0 00:28:57.057 { 00:28:57.057 "name": "ftl0", 00:28:57.057 "uuid": "0fd165a4-a7ae-4b12-8280-ae23d7c38836" 00:28:57.057 } 00:28:57.057 18:27:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:28:57.057 18:27:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:57.315 18:27:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:28:57.315 18:27:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:28:57.315 18:27:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:28:57.574 /dev/nbd0 00:28:57.574 18:27:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:28:57.574 18:27:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:57.574 18:27:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:28:57.574 18:27:07 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:57.574 18:27:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:57.574 18:27:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:57.574 18:27:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:28:57.574 18:27:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:57.574 18:27:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:57.574 18:27:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:28:57.574 1+0 records in 00:28:57.574 1+0 records out 00:28:57.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386982 s, 10.6 MB/s 00:28:57.574 18:27:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:57.574 18:27:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:28:57.574 18:27:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:57.574 18:27:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:57.574 18:27:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:28:57.574 18:27:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:28:57.574 [2024-12-06 18:27:08.076084] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:28:57.574 [2024-12-06 18:27:08.076206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81124 ] 00:28:57.833 [2024-12-06 18:27:08.258279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.092 [2024-12-06 18:27:08.413960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.471  [2024-12-06T18:27:10.986Z] Copying: 188/1024 [MB] (188 MBps) [2024-12-06T18:27:11.923Z] Copying: 389/1024 [MB] (201 MBps) [2024-12-06T18:27:12.879Z] Copying: 591/1024 [MB] (201 MBps) [2024-12-06T18:27:13.849Z] Copying: 791/1024 [MB] (200 MBps) [2024-12-06T18:27:14.107Z] Copying: 985/1024 [MB] (193 MBps) [2024-12-06T18:27:15.480Z] Copying: 1024/1024 [MB] (average 197 MBps) 00:29:04.904 00:29:04.904 18:27:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:06.276 18:27:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:29:06.532 [2024-12-06 18:27:16.925915] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
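The data-integrity flow being driven here is, in outline (paths and sizes taken from this run; the .md5 redirection is inferred, since bash xtrace does not print redirections):

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
  $SPDK_DD -m 0x2 --if=/dev/urandom --of=$TESTFILE --bs=4096 --count=262144               # fill: 262144 x 4096 B = 1 GiB of random data
  md5sum $TESTFILE > $TESTFILE.md5                                                        # record the reference checksum
  $SPDK_DD -m 0x2 --if=$TESTFILE --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct   # write it through ftl0 via the nbd export
  sync /dev/nbd0                                                                          # flush the nbd queue before shutdown
  md5sum -c $TESTFILE.md5                                                                 # verify readback, as restore.sh@82 did above

Note the throughput difference: the urandom fill to a plain file ran at roughly 200 MBps, while the copy through ftl0's nbd export that follows proceeds at roughly 18 MBps.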
00:29:06.532 [2024-12-06 18:27:16.926039] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81226 ] 00:29:06.790 [2024-12-06 18:27:17.108992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.790 [2024-12-06 18:27:17.224698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.165  [2024-12-06T18:27:19.676Z] Copying: 19/1024 [MB] (19 MBps) [2024-12-06T18:27:20.688Z] Copying: 36/1024 [MB] (17 MBps) [2024-12-06T18:27:21.625Z] Copying: 54/1024 [MB] (17 MBps) [2024-12-06T18:27:22.561Z] Copying: 71/1024 [MB] (17 MBps) [2024-12-06T18:27:23.935Z] Copying: 89/1024 [MB] (18 MBps) [2024-12-06T18:27:24.869Z] Copying: 108/1024 [MB] (18 MBps) [2024-12-06T18:27:25.803Z] Copying: 126/1024 [MB] (18 MBps) [2024-12-06T18:27:26.739Z] Copying: 143/1024 [MB] (17 MBps) [2024-12-06T18:27:27.700Z] Copying: 160/1024 [MB] (16 MBps) [2024-12-06T18:27:28.636Z] Copying: 177/1024 [MB] (16 MBps) [2024-12-06T18:27:29.572Z] Copying: 195/1024 [MB] (17 MBps) [2024-12-06T18:27:30.950Z] Copying: 212/1024 [MB] (16 MBps) [2024-12-06T18:27:31.886Z] Copying: 229/1024 [MB] (17 MBps) [2024-12-06T18:27:32.822Z] Copying: 246/1024 [MB] (17 MBps) [2024-12-06T18:27:33.758Z] Copying: 265/1024 [MB] (18 MBps) [2024-12-06T18:27:34.698Z] Copying: 283/1024 [MB] (18 MBps) [2024-12-06T18:27:35.631Z] Copying: 301/1024 [MB] (18 MBps) [2024-12-06T18:27:36.563Z] Copying: 320/1024 [MB] (18 MBps) [2024-12-06T18:27:37.936Z] Copying: 338/1024 [MB] (17 MBps) [2024-12-06T18:27:38.868Z] Copying: 356/1024 [MB] (18 MBps) [2024-12-06T18:27:39.801Z] Copying: 374/1024 [MB] (17 MBps) [2024-12-06T18:27:40.737Z] Copying: 392/1024 [MB] (17 MBps) [2024-12-06T18:27:41.693Z] Copying: 410/1024 [MB] (18 MBps) [2024-12-06T18:27:42.629Z] Copying: 428/1024 [MB] (18 MBps) [2024-12-06T18:27:43.569Z] Copying: 447/1024 [MB] (19 MBps) [2024-12-06T18:27:44.949Z] Copying: 465/1024 [MB] (18 MBps) [2024-12-06T18:27:45.518Z] Copying: 484/1024 [MB] (18 MBps) [2024-12-06T18:27:46.893Z] Copying: 502/1024 [MB] (18 MBps) [2024-12-06T18:27:47.849Z] Copying: 520/1024 [MB] (18 MBps) [2024-12-06T18:27:48.792Z] Copying: 539/1024 [MB] (18 MBps) [2024-12-06T18:27:49.727Z] Copying: 557/1024 [MB] (18 MBps) [2024-12-06T18:27:50.665Z] Copying: 576/1024 [MB] (18 MBps) [2024-12-06T18:27:51.603Z] Copying: 595/1024 [MB] (18 MBps) [2024-12-06T18:27:52.541Z] Copying: 613/1024 [MB] (18 MBps) [2024-12-06T18:27:53.921Z] Copying: 632/1024 [MB] (18 MBps) [2024-12-06T18:27:54.551Z] Copying: 651/1024 [MB] (18 MBps) [2024-12-06T18:27:55.927Z] Copying: 669/1024 [MB] (18 MBps) [2024-12-06T18:27:56.493Z] Copying: 688/1024 [MB] (18 MBps) [2024-12-06T18:27:57.867Z] Copying: 707/1024 [MB] (18 MBps) [2024-12-06T18:27:58.800Z] Copying: 726/1024 [MB] (18 MBps) [2024-12-06T18:27:59.736Z] Copying: 744/1024 [MB] (18 MBps) [2024-12-06T18:28:00.675Z] Copying: 763/1024 [MB] (18 MBps) [2024-12-06T18:28:01.613Z] Copying: 781/1024 [MB] (18 MBps) [2024-12-06T18:28:02.551Z] Copying: 800/1024 [MB] (18 MBps) [2024-12-06T18:28:03.490Z] Copying: 818/1024 [MB] (18 MBps) [2024-12-06T18:28:04.871Z] Copying: 836/1024 [MB] (18 MBps) [2024-12-06T18:28:05.821Z] Copying: 855/1024 [MB] (18 MBps) [2024-12-06T18:28:06.757Z] Copying: 874/1024 [MB] (18 MBps) [2024-12-06T18:28:07.693Z] Copying: 892/1024 [MB] (18 MBps) [2024-12-06T18:28:08.633Z] Copying: 910/1024 [MB] (18 MBps) 
[2024-12-06T18:28:09.573Z] Copying: 929/1024 [MB] (18 MBps) [2024-12-06T18:28:10.511Z] Copying: 947/1024 [MB] (18 MBps) [2024-12-06T18:28:11.891Z] Copying: 966/1024 [MB] (18 MBps) [2024-12-06T18:28:12.831Z] Copying: 985/1024 [MB] (19 MBps) [2024-12-06T18:28:13.768Z] Copying: 1004/1024 [MB] (19 MBps) [2024-12-06T18:28:13.768Z] Copying: 1023/1024 [MB] (18 MBps) [2024-12-06T18:28:14.705Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:30:04.129 00:30:04.129 18:28:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:30:04.129 18:28:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:30:04.388 18:28:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:04.647 [2024-12-06 18:28:15.072956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.647 [2024-12-06 18:28:15.073222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:04.647 [2024-12-06 18:28:15.073336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:04.647 [2024-12-06 18:28:15.073381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.647 [2024-12-06 18:28:15.073461] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:04.647 [2024-12-06 18:28:15.077806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.647 [2024-12-06 18:28:15.077949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:04.647 [2024-12-06 18:28:15.077976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.289 ms 00:30:04.647 [2024-12-06 18:28:15.077988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.647 [2024-12-06 18:28:15.080125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.648 [2024-12-06 18:28:15.080166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:04.648 [2024-12-06 18:28:15.080183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.097 ms 00:30:04.648 [2024-12-06 18:28:15.080193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.648 [2024-12-06 18:28:15.098093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.648 [2024-12-06 18:28:15.098133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:04.648 [2024-12-06 18:28:15.098151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.899 ms 00:30:04.648 [2024-12-06 18:28:15.098162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.648 [2024-12-06 18:28:15.103261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.648 [2024-12-06 18:28:15.103304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:04.648 [2024-12-06 18:28:15.103319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.065 ms 00:30:04.648 [2024-12-06 18:28:15.103330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.648 [2024-12-06 18:28:15.140770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.648 [2024-12-06 18:28:15.140847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:04.648 [2024-12-06 18:28:15.140868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 37.411 ms 00:30:04.648 [2024-12-06 18:28:15.140878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.648 [2024-12-06 18:28:15.164262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.648 [2024-12-06 18:28:15.164503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:04.648 [2024-12-06 18:28:15.164596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.340 ms 00:30:04.648 [2024-12-06 18:28:15.164634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.648 [2024-12-06 18:28:15.164896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.648 [2024-12-06 18:28:15.165047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:04.648 [2024-12-06 18:28:15.165069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:30:04.648 [2024-12-06 18:28:15.165080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.648 [2024-12-06 18:28:15.203784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.648 [2024-12-06 18:28:15.204026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:04.648 [2024-12-06 18:28:15.204059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.732 ms 00:30:04.648 [2024-12-06 18:28:15.204071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.908 [2024-12-06 18:28:15.241746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.908 [2024-12-06 18:28:15.241818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:04.908 [2024-12-06 18:28:15.241838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.669 ms 00:30:04.908 [2024-12-06 18:28:15.241848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.908 [2024-12-06 18:28:15.280677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.908 [2024-12-06 18:28:15.280746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:04.908 [2024-12-06 18:28:15.280765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.807 ms 00:30:04.908 [2024-12-06 18:28:15.280776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.908 [2024-12-06 18:28:15.318547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.908 [2024-12-06 18:28:15.318608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:04.908 [2024-12-06 18:28:15.318627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.665 ms 00:30:04.908 [2024-12-06 18:28:15.318638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.908 [2024-12-06 18:28:15.318696] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:04.908 [2024-12-06 18:28:15.318713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:04.908 [2024-12-06 18:28:15.318729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:04.908 [2024-12-06 18:28:15.318741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:04.908 [2024-12-06 18:28:15.318754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:04.908 
[2024-12-06 18:28:15.318765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:04.908 [2024-12-06 18:28:15.318778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:04.908 [2024-12-06 18:28:15.318790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:04.908 [2024-12-06 18:28:15.318807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:04.908 [2024-12-06 18:28:15.318818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.318837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.318848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.318861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.318873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.318886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.318897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.318910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.318921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.318934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.318945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.318959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.318969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.318985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.318996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:30:04.909 [2024-12-06 18:28:15.319092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.319995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:04.909 [2024-12-06 18:28:15.320013] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:04.909 [2024-12-06 18:28:15.320025] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0fd165a4-a7ae-4b12-8280-ae23d7c38836 00:30:04.909 [2024-12-06 18:28:15.320037] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:04.909 [2024-12-06 18:28:15.320053] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:30:04.909 [2024-12-06 18:28:15.320066] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:04.909 [2024-12-06 18:28:15.320079] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:04.909 [2024-12-06 18:28:15.320092] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:04.909 [2024-12-06 18:28:15.320104] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:04.909 [2024-12-06 18:28:15.320114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:04.909 [2024-12-06 18:28:15.320126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:04.909 [2024-12-06 18:28:15.320135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:04.909 [2024-12-06 18:28:15.320149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.909 [2024-12-06 18:28:15.320161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:04.909 [2024-12-06 18:28:15.320174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.458 ms 00:30:04.909 [2024-12-06 18:28:15.320184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.909 [2024-12-06 18:28:15.340627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.909 [2024-12-06 18:28:15.340675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:04.909 [2024-12-06 18:28:15.340692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.406 ms 00:30:04.909 [2024-12-06 18:28:15.340703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.909 [2024-12-06 18:28:15.341285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.909 [2024-12-06 18:28:15.341297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:04.909 [2024-12-06 18:28:15.341327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:30:04.909 [2024-12-06 18:28:15.341337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.909 [2024-12-06 18:28:15.406955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.909 [2024-12-06 18:28:15.407155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:04.909 [2024-12-06 18:28:15.407183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.909 [2024-12-06 18:28:15.407194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.909 [2024-12-06 18:28:15.407262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.909 [2024-12-06 18:28:15.407290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:04.909 [2024-12-06 18:28:15.407303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.909 [2024-12-06 18:28:15.407313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.909 [2024-12-06 18:28:15.407423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.909 [2024-12-06 18:28:15.407440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:04.909 [2024-12-06 18:28:15.407454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.909 [2024-12-06 18:28:15.407464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.909 [2024-12-06 18:28:15.407490] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.909 [2024-12-06 18:28:15.407501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:04.909 [2024-12-06 18:28:15.407514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.909 [2024-12-06 18:28:15.407524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.167 [2024-12-06 18:28:15.529948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.167 [2024-12-06 18:28:15.530003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:05.167 [2024-12-06 18:28:15.530020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.167 [2024-12-06 18:28:15.530030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.167 [2024-12-06 18:28:15.632390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.167 [2024-12-06 18:28:15.632452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:05.167 [2024-12-06 18:28:15.632470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.167 [2024-12-06 18:28:15.632481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.168 [2024-12-06 18:28:15.632602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.168 [2024-12-06 18:28:15.632616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:05.168 [2024-12-06 18:28:15.632632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.168 [2024-12-06 18:28:15.632643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.168 [2024-12-06 18:28:15.632706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.168 [2024-12-06 18:28:15.632717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:05.168 [2024-12-06 18:28:15.632731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.168 [2024-12-06 18:28:15.632740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.168 [2024-12-06 18:28:15.632871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.168 [2024-12-06 18:28:15.632885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:05.168 [2024-12-06 18:28:15.632898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.168 [2024-12-06 18:28:15.632912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.168 [2024-12-06 18:28:15.632954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.168 [2024-12-06 18:28:15.632966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:05.168 [2024-12-06 18:28:15.632978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.168 [2024-12-06 18:28:15.632989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.168 [2024-12-06 18:28:15.633030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.168 [2024-12-06 18:28:15.633042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:05.168 [2024-12-06 18:28:15.633055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.168 [2024-12-06 18:28:15.633068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:30:05.168 [2024-12-06 18:28:15.633115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.168 [2024-12-06 18:28:15.633127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:05.168 [2024-12-06 18:28:15.633141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.168 [2024-12-06 18:28:15.633152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.168 [2024-12-06 18:28:15.633308] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 561.205 ms, result 0 00:30:05.168 true 00:30:05.168 18:28:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80976 00:30:05.168 18:28:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80976 00:30:05.168 18:28:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:30:05.426 [2024-12-06 18:28:15.760635] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:30:05.426 [2024-12-06 18:28:15.760754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81822 ] 00:30:05.426 [2024-12-06 18:28:15.939220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.683 [2024-12-06 18:28:16.052610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.063  [2024-12-06T18:28:18.613Z] Copying: 199/1024 [MB] (199 MBps) [2024-12-06T18:28:19.550Z] Copying: 402/1024 [MB] (202 MBps) [2024-12-06T18:28:20.486Z] Copying: 605/1024 [MB] (203 MBps) [2024-12-06T18:28:21.425Z] Copying: 809/1024 [MB] (203 MBps) [2024-12-06T18:28:21.685Z] Copying: 1006/1024 [MB] (197 MBps) [2024-12-06T18:28:22.624Z] Copying: 1024/1024 [MB] (average 201 MBps) 00:30:12.048 00:30:12.048 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80976 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:30:12.048 18:28:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:12.307 [2024-12-06 18:28:22.700013] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
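
Here the harness tears the target down abruptly: kill -9 at dirty_shutdown.sh@83 ends the spdk_tgt process with no chance of an orderly teardown (the shell's "Killed" message for pid 80976 surfaces once the next command runs), and the spdk_dd invocation at @88 then recreates the bdev stack inside its own process from the JSON snapshot, so no running target is needed. A sketch of that sequence, with the PID and file paths as placeholders:

    # Simulate a crash: SIGKILL the target and drop its stale trace file.
    kill -9 "$svcpid"
    rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"

    # Write a second payload through the FTL bdev from standalone spdk_dd,
    # replaying the saved configuration with --json.
    ./build/bin/spdk_dd --if=/tmp/testfile2 --ob=ftl0 \
        --count=262144 --seek=262144 --json=/tmp/ftl.json

The --seek=262144 places this second payload at a 1 GiB offset (262144 x 4 KiB blocks), directly after the region written before the kill, so both regions can be verified independently afterwards.
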
00:30:12.307 [2024-12-06 18:28:22.700154] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81895 ] 00:30:12.307 [2024-12-06 18:28:22.882014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.567 [2024-12-06 18:28:23.000200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.826 [2024-12-06 18:28:23.376862] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:12.826 [2024-12-06 18:28:23.376933] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:13.086 [2024-12-06 18:28:23.443100] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:13.086 [2024-12-06 18:28:23.443419] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:13.086 [2024-12-06 18:28:23.443622] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:13.417 [2024-12-06 18:28:23.700232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.417 [2024-12-06 18:28:23.700307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:13.417 [2024-12-06 18:28:23.700323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:13.417 [2024-12-06 18:28:23.700337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.417 [2024-12-06 18:28:23.700405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.417 [2024-12-06 18:28:23.700417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:13.417 [2024-12-06 18:28:23.700428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:30:13.417 [2024-12-06 18:28:23.700438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.417 [2024-12-06 18:28:23.700460] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:13.417 [2024-12-06 18:28:23.701405] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:13.417 [2024-12-06 18:28:23.701433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.417 [2024-12-06 18:28:23.701444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:13.417 [2024-12-06 18:28:23.701456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:30:13.417 [2024-12-06 18:28:23.701465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.417 [2024-12-06 18:28:23.702912] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:13.417 [2024-12-06 18:28:23.722264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.417 [2024-12-06 18:28:23.722314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:13.418 [2024-12-06 18:28:23.722329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.384 ms 00:30:13.418 [2024-12-06 18:28:23.722340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.418 [2024-12-06 18:28:23.722430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.418 [2024-12-06 18:28:23.722443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:30:13.418 [2024-12-06 18:28:23.722455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:30:13.418 [2024-12-06 18:28:23.722465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.418 [2024-12-06 18:28:23.729227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.418 [2024-12-06 18:28:23.729259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:13.418 [2024-12-06 18:28:23.729299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.698 ms 00:30:13.418 [2024-12-06 18:28:23.729309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.418 [2024-12-06 18:28:23.729398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.418 [2024-12-06 18:28:23.729412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:13.418 [2024-12-06 18:28:23.729424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:13.418 [2024-12-06 18:28:23.729434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.418 [2024-12-06 18:28:23.729478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.418 [2024-12-06 18:28:23.729490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:13.418 [2024-12-06 18:28:23.729501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:13.418 [2024-12-06 18:28:23.729511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.418 [2024-12-06 18:28:23.729535] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:13.418 [2024-12-06 18:28:23.734613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.418 [2024-12-06 18:28:23.734755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:13.418 [2024-12-06 18:28:23.734888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.092 ms 00:30:13.418 [2024-12-06 18:28:23.734926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.418 [2024-12-06 18:28:23.734986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.418 [2024-12-06 18:28:23.735019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:13.418 [2024-12-06 18:28:23.735050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:13.418 [2024-12-06 18:28:23.735137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.418 [2024-12-06 18:28:23.735229] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:13.418 [2024-12-06 18:28:23.735298] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:13.418 [2024-12-06 18:28:23.735377] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:13.418 [2024-12-06 18:28:23.735528] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:13.418 [2024-12-06 18:28:23.735654] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:13.418 [2024-12-06 18:28:23.735764] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:13.418 
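
Because the previous process was killed rather than shut down, the reload has extra work before ftl0 is usable: the blobstore on the underlying device is replayed ("Performing recovery on blobstore" above) and the superblock load reports "SHM: clean 0, shm_clean 0", i.e. there is no clean shared-memory state to fast-attach to. Re-attaching an existing FTL instance by hand goes through this same startup; a hedged sketch using the bdev_ftl_load RPC (assuming its usual -b/-d/-c/-u flags — check rpc.py bdev_ftl_load --help — and with $BASE_BDEV as a placeholder, since the base bdev's name is not visible in this excerpt):

    # Reload an existing FTL bdev by UUID (the UUID reported when ftl0 was created).
    ./scripts/rpc.py bdev_ftl_load -b ftl0 -d "$BASE_BDEV" -c nvc0n1p0 \
        -u 0fd165a4-a7ae-4b12-8280-ae23d7c38836
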
[2024-12-06 18:28:23.735944] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:13.418 [2024-12-06 18:28:23.736002] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:13.418 [2024-12-06 18:28:23.736052] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:13.418 [2024-12-06 18:28:23.736065] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:13.418 [2024-12-06 18:28:23.736076] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:13.418 [2024-12-06 18:28:23.736086] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:13.418 [2024-12-06 18:28:23.736096] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:13.418 [2024-12-06 18:28:23.736107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.418 [2024-12-06 18:28:23.736118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:13.418 [2024-12-06 18:28:23.736129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.883 ms 00:30:13.418 [2024-12-06 18:28:23.736139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.418 [2024-12-06 18:28:23.736221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.418 [2024-12-06 18:28:23.736237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:13.418 [2024-12-06 18:28:23.736247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:13.418 [2024-12-06 18:28:23.736257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.418 [2024-12-06 18:28:23.736373] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:13.418 [2024-12-06 18:28:23.736388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:13.418 [2024-12-06 18:28:23.736399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:13.418 [2024-12-06 18:28:23.736409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.418 [2024-12-06 18:28:23.736420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:13.418 [2024-12-06 18:28:23.736429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:13.418 [2024-12-06 18:28:23.736438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:13.418 [2024-12-06 18:28:23.736448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:13.418 [2024-12-06 18:28:23.736457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:13.418 [2024-12-06 18:28:23.736476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:13.418 [2024-12-06 18:28:23.736485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:13.418 [2024-12-06 18:28:23.736494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:13.418 [2024-12-06 18:28:23.736503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:13.418 [2024-12-06 18:28:23.736513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:13.418 [2024-12-06 18:28:23.736522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:13.418 [2024-12-06 18:28:23.736531] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.418 [2024-12-06 18:28:23.736540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:13.418 [2024-12-06 18:28:23.736549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:13.418 [2024-12-06 18:28:23.736558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.418 [2024-12-06 18:28:23.736571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:13.418 [2024-12-06 18:28:23.736582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:13.418 [2024-12-06 18:28:23.736591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:13.418 [2024-12-06 18:28:23.736601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:13.418 [2024-12-06 18:28:23.736610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:13.418 [2024-12-06 18:28:23.736620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:13.418 [2024-12-06 18:28:23.736629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:13.418 [2024-12-06 18:28:23.736638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:13.418 [2024-12-06 18:28:23.736647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:13.418 [2024-12-06 18:28:23.736656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:13.418 [2024-12-06 18:28:23.736665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:13.418 [2024-12-06 18:28:23.736674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:13.418 [2024-12-06 18:28:23.736683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:13.418 [2024-12-06 18:28:23.736692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:13.418 [2024-12-06 18:28:23.736701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:13.418 [2024-12-06 18:28:23.736710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:13.418 [2024-12-06 18:28:23.736719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:13.418 [2024-12-06 18:28:23.736728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:13.418 [2024-12-06 18:28:23.736737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:13.418 [2024-12-06 18:28:23.736746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:13.418 [2024-12-06 18:28:23.736755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.418 [2024-12-06 18:28:23.736764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:13.418 [2024-12-06 18:28:23.736773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:13.418 [2024-12-06 18:28:23.736782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.418 [2024-12-06 18:28:23.736791] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:13.418 [2024-12-06 18:28:23.736801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:13.418 [2024-12-06 18:28:23.736814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:13.418 [2024-12-06 18:28:23.736824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:13.418 [2024-12-06 
18:28:23.736834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:13.418 [2024-12-06 18:28:23.736843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:13.418 [2024-12-06 18:28:23.736852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:13.419 [2024-12-06 18:28:23.736861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:13.419 [2024-12-06 18:28:23.736872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:13.419 [2024-12-06 18:28:23.736883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:13.419 [2024-12-06 18:28:23.736894] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:13.419 [2024-12-06 18:28:23.736907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:13.419 [2024-12-06 18:28:23.736918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:13.419 [2024-12-06 18:28:23.736929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:13.419 [2024-12-06 18:28:23.736939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:13.419 [2024-12-06 18:28:23.736949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:13.419 [2024-12-06 18:28:23.736959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:13.419 [2024-12-06 18:28:23.736970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:13.419 [2024-12-06 18:28:23.736980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:13.419 [2024-12-06 18:28:23.736990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:13.419 [2024-12-06 18:28:23.737000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:13.419 [2024-12-06 18:28:23.737010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:13.419 [2024-12-06 18:28:23.737020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:13.419 [2024-12-06 18:28:23.737031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:13.419 [2024-12-06 18:28:23.737041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:13.419 [2024-12-06 18:28:23.737051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:13.419 [2024-12-06 18:28:23.737061] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:30:13.419 [2024-12-06 18:28:23.737072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:13.419 [2024-12-06 18:28:23.737082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:13.419 [2024-12-06 18:28:23.737093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:13.419 [2024-12-06 18:28:23.737102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:13.419 [2024-12-06 18:28:23.737113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:13.419 [2024-12-06 18:28:23.737123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.419 [2024-12-06 18:28:23.737133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:13.419 [2024-12-06 18:28:23.737143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:30:13.419 [2024-12-06 18:28:23.737153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.419 [2024-12-06 18:28:23.777238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.419 [2024-12-06 18:28:23.777308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:13.419 [2024-12-06 18:28:23.777342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.099 ms 00:30:13.419 [2024-12-06 18:28:23.777353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.419 [2024-12-06 18:28:23.777467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.419 [2024-12-06 18:28:23.777479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:13.419 [2024-12-06 18:28:23.777490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:30:13.419 [2024-12-06 18:28:23.777500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.419 [2024-12-06 18:28:23.834777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.419 [2024-12-06 18:28:23.834837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:13.419 [2024-12-06 18:28:23.834857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.285 ms 00:30:13.419 [2024-12-06 18:28:23.834868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.419 [2024-12-06 18:28:23.834930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.419 [2024-12-06 18:28:23.834941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:13.419 [2024-12-06 18:28:23.834952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:13.419 [2024-12-06 18:28:23.834962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.419 [2024-12-06 18:28:23.835497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.419 [2024-12-06 18:28:23.835513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:13.419 [2024-12-06 18:28:23.835524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:30:13.419 [2024-12-06 18:28:23.835541] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.419 [2024-12-06 18:28:23.835663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.419 [2024-12-06 18:28:23.835677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:13.419 [2024-12-06 18:28:23.835688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:30:13.419 [2024-12-06 18:28:23.835698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.419 [2024-12-06 18:28:23.855005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.419 [2024-12-06 18:28:23.855307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:13.419 [2024-12-06 18:28:23.855333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.315 ms 00:30:13.419 [2024-12-06 18:28:23.855344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.419 [2024-12-06 18:28:23.875410] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:13.419 [2024-12-06 18:28:23.875472] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:13.419 [2024-12-06 18:28:23.875489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.419 [2024-12-06 18:28:23.875500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:13.419 [2024-12-06 18:28:23.875514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.023 ms 00:30:13.419 [2024-12-06 18:28:23.875524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.419 [2024-12-06 18:28:23.905490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.419 [2024-12-06 18:28:23.905540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:13.419 [2024-12-06 18:28:23.905555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.953 ms 00:30:13.419 [2024-12-06 18:28:23.905566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.419 [2024-12-06 18:28:23.924670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.419 [2024-12-06 18:28:23.924713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:13.419 [2024-12-06 18:28:23.924728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.059 ms 00:30:13.419 [2024-12-06 18:28:23.924738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.707 [2024-12-06 18:28:23.943326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.707 [2024-12-06 18:28:23.943370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:13.707 [2024-12-06 18:28:23.943385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.569 ms 00:30:13.707 [2024-12-06 18:28:23.943395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.707 [2024-12-06 18:28:23.944197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.707 [2024-12-06 18:28:23.944222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:13.707 [2024-12-06 18:28:23.944235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:30:13.707 [2024-12-06 18:28:23.944245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
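
The layout dumped above is internally consistent with the L2P parameters, which makes for a quick sanity check when reading these logs: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB reported for the l2p region, and the same entry count at one 4 KiB block each gives 80 GiB of logical address space against the 103424.00 MiB base device, the remainder presumably going to FTL overprovisioning and metadata. In shell arithmetic:

    # Cross-check the layout dump: L2P region size and logical capacity.
    echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80 -> MiB, matches "Region l2p ... 80.00 MiB"
    echo $(( 20971520 * 4096 / 1024**3 ))    # 80 -> GiB of logical address space
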
00:30:13.707 [2024-12-06 18:28:24.031861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.707 [2024-12-06 18:28:24.031933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:13.707 [2024-12-06 18:28:24.031950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.714 ms 00:30:13.707 [2024-12-06 18:28:24.031978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.707 [2024-12-06 18:28:24.044177] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:13.707 [2024-12-06 18:28:24.047473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.707 [2024-12-06 18:28:24.047510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:13.707 [2024-12-06 18:28:24.047525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.432 ms 00:30:13.707 [2024-12-06 18:28:24.047540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.707 [2024-12-06 18:28:24.047650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.707 [2024-12-06 18:28:24.047664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:13.707 [2024-12-06 18:28:24.047675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:13.707 [2024-12-06 18:28:24.047685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.707 [2024-12-06 18:28:24.047777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.707 [2024-12-06 18:28:24.047790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:13.707 [2024-12-06 18:28:24.047801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:13.707 [2024-12-06 18:28:24.047810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.707 [2024-12-06 18:28:24.047839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.707 [2024-12-06 18:28:24.047850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:13.707 [2024-12-06 18:28:24.047861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:13.707 [2024-12-06 18:28:24.047871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.707 [2024-12-06 18:28:24.047903] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:13.707 [2024-12-06 18:28:24.047914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.707 [2024-12-06 18:28:24.047925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:13.707 [2024-12-06 18:28:24.047935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:13.707 [2024-12-06 18:28:24.047949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.707 [2024-12-06 18:28:24.085212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.707 [2024-12-06 18:28:24.085424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:13.707 [2024-12-06 18:28:24.085448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.300 ms 00:30:13.707 [2024-12-06 18:28:24.085459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.707 [2024-12-06 18:28:24.085554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.707 [2024-12-06 
18:28:24.085568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:13.707 [2024-12-06 18:28:24.085580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:13.707 [2024-12-06 18:28:24.085590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.707 [2024-12-06 18:28:24.086701] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 386.650 ms, result 0 00:30:14.650  [2024-12-06T18:28:26.173Z] Copying: 26/1024 [MB] (26 MBps) [2024-12-06T18:28:27.108Z] Copying: 53/1024 [MB] (26 MBps) [2024-12-06T18:28:28.484Z] Copying: 80/1024 [MB] (26 MBps) [2024-12-06T18:28:29.421Z] Copying: 107/1024 [MB] (26 MBps) [2024-12-06T18:28:30.387Z] Copying: 134/1024 [MB] (26 MBps) [2024-12-06T18:28:31.324Z] Copying: 161/1024 [MB] (26 MBps) [2024-12-06T18:28:32.263Z] Copying: 187/1024 [MB] (26 MBps) [2024-12-06T18:28:33.201Z] Copying: 213/1024 [MB] (26 MBps) [2024-12-06T18:28:34.137Z] Copying: 239/1024 [MB] (26 MBps) [2024-12-06T18:28:35.511Z] Copying: 267/1024 [MB] (27 MBps) [2024-12-06T18:28:36.444Z] Copying: 295/1024 [MB] (28 MBps) [2024-12-06T18:28:37.381Z] Copying: 323/1024 [MB] (27 MBps) [2024-12-06T18:28:38.316Z] Copying: 350/1024 [MB] (27 MBps) [2024-12-06T18:28:39.250Z] Copying: 376/1024 [MB] (26 MBps) [2024-12-06T18:28:40.186Z] Copying: 401/1024 [MB] (25 MBps) [2024-12-06T18:28:41.120Z] Copying: 428/1024 [MB] (26 MBps) [2024-12-06T18:28:42.098Z] Copying: 455/1024 [MB] (27 MBps) [2024-12-06T18:28:43.478Z] Copying: 480/1024 [MB] (25 MBps) [2024-12-06T18:28:44.415Z] Copying: 506/1024 [MB] (25 MBps) [2024-12-06T18:28:45.352Z] Copying: 531/1024 [MB] (25 MBps) [2024-12-06T18:28:46.287Z] Copying: 558/1024 [MB] (26 MBps) [2024-12-06T18:28:47.223Z] Copying: 584/1024 [MB] (26 MBps) [2024-12-06T18:28:48.161Z] Copying: 611/1024 [MB] (26 MBps) [2024-12-06T18:28:49.100Z] Copying: 637/1024 [MB] (26 MBps) [2024-12-06T18:28:50.485Z] Copying: 664/1024 [MB] (27 MBps) [2024-12-06T18:28:51.421Z] Copying: 692/1024 [MB] (27 MBps) [2024-12-06T18:28:52.357Z] Copying: 710/1024 [MB] (18 MBps) [2024-12-06T18:28:53.292Z] Copying: 733/1024 [MB] (23 MBps) [2024-12-06T18:28:54.280Z] Copying: 757/1024 [MB] (23 MBps) [2024-12-06T18:28:55.237Z] Copying: 781/1024 [MB] (24 MBps) [2024-12-06T18:28:56.172Z] Copying: 806/1024 [MB] (24 MBps) [2024-12-06T18:28:57.150Z] Copying: 831/1024 [MB] (24 MBps) [2024-12-06T18:28:58.087Z] Copying: 861/1024 [MB] (30 MBps) [2024-12-06T18:28:59.463Z] Copying: 893/1024 [MB] (32 MBps) [2024-12-06T18:29:00.401Z] Copying: 922/1024 [MB] (28 MBps) [2024-12-06T18:29:01.388Z] Copying: 948/1024 [MB] (26 MBps) [2024-12-06T18:29:02.327Z] Copying: 975/1024 [MB] (26 MBps) [2024-12-06T18:29:03.267Z] Copying: 1000/1024 [MB] (25 MBps) [2024-12-06T18:29:03.267Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-12-06 18:29:02.990477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.691 [2024-12-06 18:29:02.990787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:52.691 [2024-12-06 18:29:02.990898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:52.691 [2024-12-06 18:29:02.990945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.691 [2024-12-06 18:29:02.990997] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:52.691 [2024-12-06 18:29:02.995750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:30:52.691 [2024-12-06 18:29:02.995785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:52.691 [2024-12-06 18:29:02.995799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.739 ms 00:30:52.691 [2024-12-06 18:29:02.995809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.691 [2024-12-06 18:29:03.001079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.691 [2024-12-06 18:29:03.001119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:52.691 [2024-12-06 18:29:03.001132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.247 ms 00:30:52.691 [2024-12-06 18:29:03.001142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.691 [2024-12-06 18:29:03.021657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.691 [2024-12-06 18:29:03.021698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:52.691 [2024-12-06 18:29:03.021713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.531 ms 00:30:52.691 [2024-12-06 18:29:03.021724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.691 [2024-12-06 18:29:03.026796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.691 [2024-12-06 18:29:03.026837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:52.691 [2024-12-06 18:29:03.026849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.047 ms 00:30:52.691 [2024-12-06 18:29:03.026859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.691 [2024-12-06 18:29:03.063903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.691 [2024-12-06 18:29:03.064073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:52.691 [2024-12-06 18:29:03.064097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.043 ms 00:30:52.691 [2024-12-06 18:29:03.064111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.691 [2024-12-06 18:29:03.088163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.691 [2024-12-06 18:29:03.088207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:52.691 [2024-12-06 18:29:03.088223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.046 ms 00:30:52.691 [2024-12-06 18:29:03.088234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.691 [2024-12-06 18:29:03.089712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.691 [2024-12-06 18:29:03.089866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:52.691 [2024-12-06 18:29:03.089894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:30:52.691 [2024-12-06 18:29:03.089906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.691 [2024-12-06 18:29:03.129220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.691 [2024-12-06 18:29:03.129294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:52.691 [2024-12-06 18:29:03.129311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.352 ms 00:30:52.691 [2024-12-06 18:29:03.129334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.691 [2024-12-06 
18:29:03.169709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.691 [2024-12-06 18:29:03.169755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:52.691 [2024-12-06 18:29:03.169769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.386 ms 00:30:52.691 [2024-12-06 18:29:03.169779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.691 [2024-12-06 18:29:03.205404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.691 [2024-12-06 18:29:03.205554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:52.691 [2024-12-06 18:29:03.205575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.639 ms 00:30:52.691 [2024-12-06 18:29:03.205587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.691 [2024-12-06 18:29:03.241864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.691 [2024-12-06 18:29:03.241913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:52.691 [2024-12-06 18:29:03.241928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.256 ms 00:30:52.691 [2024-12-06 18:29:03.241937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.691 [2024-12-06 18:29:03.241976] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:52.691 [2024-12-06 18:29:03.241993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 1024 / 261120 wr_cnt: 1 state: open 00:30:52.691 [2024-12-06 18:29:03.242007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:52.691 [2024-12-06 18:29:03.242018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:52.691 [2024-12-06 18:29:03.242029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:52.691 [2024-12-06 18:29:03.242040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 
/ 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242710] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 18:29:03.242960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:52.692 [2024-12-06 
18:29:03.242971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:30:52.692 [2024-12-06 18:29:03.242981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:30:52.692 [2024-12-06 18:29:03.242991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:30:52.692 [2024-12-06 18:29:03.243002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:30:52.693 [2024-12-06 18:29:03.243013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:30:52.693 [2024-12-06 18:29:03.243023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:30:52.693 [2024-12-06 18:29:03.243034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:30:52.693 [2024-12-06 18:29:03.243044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:30:52.693 [2024-12-06 18:29:03.243054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:30:52.693 [2024-12-06 18:29:03.243065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:30:52.693 [2024-12-06 18:29:03.243076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:30:52.693 [2024-12-06 18:29:03.243094] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:30:52.693 [2024-12-06 18:29:03.243104] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0fd165a4-a7ae-4b12-8280-ae23d7c38836
00:30:52.693 [2024-12-06 18:29:03.243127] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 1024
00:30:52.693 [2024-12-06 18:29:03.243142] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1984
00:30:52.693 [2024-12-06 18:29:03.243152] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1024
00:30:52.693 [2024-12-06 18:29:03.243162] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.9375
00:30:52.693 [2024-12-06 18:29:03.243172] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:30:52.693 [2024-12-06 18:29:03.243182] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:30:52.693 [2024-12-06 18:29:03.243192] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:30:52.693 [2024-12-06 18:29:03.243201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:30:52.693 [2024-12-06 18:29:03.243214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:30:52.693 [2024-12-06 18:29:03.243224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:52.693 [2024-12-06 18:29:03.243235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:30:52.693 [2024-12-06 18:29:03.243245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.252 ms
00:30:52.693 [2024-12-06 18:29:03.243255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:52.693 [2024-12-06 18:29:03.262958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:52.693 [2024-12-06 18:29:03.263002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:30:52.693 [2024-12-06 18:29:03.263015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.688 ms
00:30:52.693 [2024-12-06 18:29:03.263025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:52.693 [2024-12-06 18:29:03.263616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:52.693 [2024-12-06 18:29:03.263638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:30:52.693 [2024-12-06 18:29:03.263649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms
00:30:52.693 [2024-12-06 18:29:03.263666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:52.952 [2024-12-06 18:29:03.314320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:52.952 [2024-12-06 18:29:03.314388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:30:52.952 [2024-12-06 18:29:03.314404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:52.952 [2024-12-06 18:29:03.314415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:52.952 [2024-12-06 18:29:03.314482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:52.952 [2024-12-06 18:29:03.314494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:30:52.952 [2024-12-06 18:29:03.314504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:52.952 [2024-12-06 18:29:03.314520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:52.952 [2024-12-06 18:29:03.314623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:52.952 [2024-12-06 18:29:03.314638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:30:52.952 [2024-12-06 18:29:03.314648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:52.952 [2024-12-06 18:29:03.314658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:52.952 [2024-12-06 18:29:03.314676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:52.952 [2024-12-06 18:29:03.314687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:30:52.952 [2024-12-06 18:29:03.314697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:52.952 [2024-12-06 18:29:03.314707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:52.952 [2024-12-06 18:29:03.438001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:52.952 [2024-12-06 18:29:03.438068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:30:52.952 [2024-12-06 18:29:03.438095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:52.952 [2024-12-06 18:29:03.438106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:53.210 [2024-12-06 18:29:03.539122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:53.210 [2024-12-06 18:29:03.539186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:30:53.210 [2024-12-06 18:29:03.539201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:53.210 [2024-12-06 18:29:03.539212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:53.210 [2024-12-06 18:29:03.539328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:53.210 [2024-12-06 18:29:03.539341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:30:53.210 [2024-12-06 18:29:03.539352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:53.210 [2024-12-06 18:29:03.539362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:53.210 [2024-12-06 18:29:03.539429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:53.210 [2024-12-06 18:29:03.539442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:30:53.210 [2024-12-06 18:29:03.539452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:53.210 [2024-12-06 18:29:03.539462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:53.210 [2024-12-06 18:29:03.539575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:53.210 [2024-12-06 18:29:03.539592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:30:53.210 [2024-12-06 18:29:03.539602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:53.210 [2024-12-06 18:29:03.539613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:53.210 [2024-12-06 18:29:03.539649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:53.210 [2024-12-06 18:29:03.539661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:30:53.210 [2024-12-06 18:29:03.539671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:53.210 [2024-12-06 18:29:03.539681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:53.210 [2024-12-06 18:29:03.539718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:53.210 [2024-12-06 18:29:03.539733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:30:53.210 [2024-12-06 18:29:03.539743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:53.210 [2024-12-06 18:29:03.539753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:53.210 [2024-12-06 18:29:03.539792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:53.210 [2024-12-06 18:29:03.539804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:30:53.210 [2024-12-06 18:29:03.539814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:53.210 [2024-12-06 18:29:03.539824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:53.210 [2024-12-06 18:29:03.539940] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 550.338 ms, result 0
00:30:54.146
00:30:54.146
00:30:54.146 18:29:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:30:56.061 18:29:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-12-06 18:29:06.404328] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization...
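The two xtrace commands just above are the core of the dirty-shutdown verification: dirty_shutdown.sh keeps a reference file (testfile2) on the filesystem, reads the device contents back out of ftl0 with spdk_dd after the unclean stop and restore, and then compares checksums. An illustrative reconstruction of that flow under the same paths (the real script's bookkeeping may differ in detail; only the flags shown in this log are used):

  cd /home/vagrant/spdk_repo/spdk/test/ftl
  # Read 262144 blocks back from ftl0 into testfile, then compare digests;
  # a mismatch would mean data written before the dirty shutdown was lost.
  ../../build/bin/spdk_dd --ib=ftl0 --of=testfile --count=262144 --json=config/ftl.json
  [ "$(md5sum < testfile)" = "$(md5sum < testfile2)" ] || echo "FTL dirty shutdown lost data"

(For scale, the statistics dump above puts this workload's write amplification at WAF = total writes / user writes = 1984 / 1024 = 1.9375.)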
00:30:56.061 [2024-12-06 18:29:06.404587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82336 ] 00:30:56.061 [2024-12-06 18:29:06.584878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.319 [2024-12-06 18:29:06.698775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.578 [2024-12-06 18:29:07.071954] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:56.578 [2024-12-06 18:29:07.072255] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:56.850 [2024-12-06 18:29:07.233053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.850 [2024-12-06 18:29:07.233112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:56.850 [2024-12-06 18:29:07.233129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:56.850 [2024-12-06 18:29:07.233140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.850 [2024-12-06 18:29:07.233190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.850 [2024-12-06 18:29:07.233205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:56.850 [2024-12-06 18:29:07.233216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:56.850 [2024-12-06 18:29:07.233225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.850 [2024-12-06 18:29:07.233247] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:56.850 [2024-12-06 18:29:07.234220] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:56.850 [2024-12-06 18:29:07.234248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.850 [2024-12-06 18:29:07.234258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:56.850 [2024-12-06 18:29:07.234279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:30:56.850 [2024-12-06 18:29:07.234289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.850 [2024-12-06 18:29:07.235720] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:56.850 [2024-12-06 18:29:07.254848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.850 [2024-12-06 18:29:07.255007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:56.850 [2024-12-06 18:29:07.255030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.159 ms 00:30:56.850 [2024-12-06 18:29:07.255042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.850 [2024-12-06 18:29:07.255109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.850 [2024-12-06 18:29:07.255123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:56.850 [2024-12-06 18:29:07.255134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:30:56.850 [2024-12-06 18:29:07.255144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.850 [2024-12-06 18:29:07.261901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
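The two "Currently unable to find bdev with name: nvc0n1" notices at the start of this spdk_dd run are transient: the open is retried while the JSON config is still bringing the bdevs up, and a few entries later the same device is attached successfully ("Using nvc0n1p0 as write buffer cache"). A quick way to confirm such notices resolved, under the same saved-log assumption as earlier (build.log is illustrative):

  # Each 'unable to find bdev' retry should be followed by a cache-attach line
  grep -n 'Currently unable to find bdev\|write buffer cache' build.log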
00:30:56.850 [2024-12-06 18:29:07.262043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:56.850 [2024-12-06 18:29:07.262063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.694 ms 00:30:56.850 [2024-12-06 18:29:07.262079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.850 [2024-12-06 18:29:07.262160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.850 [2024-12-06 18:29:07.262173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:56.850 [2024-12-06 18:29:07.262184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:30:56.850 [2024-12-06 18:29:07.262194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.850 [2024-12-06 18:29:07.262237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.850 [2024-12-06 18:29:07.262249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:56.850 [2024-12-06 18:29:07.262259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:56.850 [2024-12-06 18:29:07.262282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.850 [2024-12-06 18:29:07.262311] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:56.850 [2024-12-06 18:29:07.267086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.850 [2024-12-06 18:29:07.267119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:56.850 [2024-12-06 18:29:07.267134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.789 ms 00:30:56.850 [2024-12-06 18:29:07.267144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.850 [2024-12-06 18:29:07.267177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.850 [2024-12-06 18:29:07.267189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:56.850 [2024-12-06 18:29:07.267199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:56.850 [2024-12-06 18:29:07.267209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.850 [2024-12-06 18:29:07.267281] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:56.850 [2024-12-06 18:29:07.267307] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:56.850 [2024-12-06 18:29:07.267341] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:56.850 [2024-12-06 18:29:07.267368] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:56.850 [2024-12-06 18:29:07.267458] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:56.850 [2024-12-06 18:29:07.267472] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:56.850 [2024-12-06 18:29:07.267484] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:56.850 [2024-12-06 18:29:07.267497] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:56.850 [2024-12-06 18:29:07.267509] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:56.850 [2024-12-06 18:29:07.267520] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:56.850 [2024-12-06 18:29:07.267530] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:56.850 [2024-12-06 18:29:07.267543] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:56.850 [2024-12-06 18:29:07.267553] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:56.850 [2024-12-06 18:29:07.267564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.850 [2024-12-06 18:29:07.267573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:56.850 [2024-12-06 18:29:07.267584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:30:56.850 [2024-12-06 18:29:07.267593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.850 [2024-12-06 18:29:07.267666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.850 [2024-12-06 18:29:07.267677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:56.850 [2024-12-06 18:29:07.267686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:30:56.850 [2024-12-06 18:29:07.267696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.850 [2024-12-06 18:29:07.267793] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:56.850 [2024-12-06 18:29:07.267807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:56.850 [2024-12-06 18:29:07.267817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:56.850 [2024-12-06 18:29:07.267827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:56.850 [2024-12-06 18:29:07.267837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:56.850 [2024-12-06 18:29:07.267846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:56.850 [2024-12-06 18:29:07.267856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:56.850 [2024-12-06 18:29:07.267865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:56.850 [2024-12-06 18:29:07.267875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:56.850 [2024-12-06 18:29:07.267884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:56.850 [2024-12-06 18:29:07.267893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:56.850 [2024-12-06 18:29:07.267904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:56.850 [2024-12-06 18:29:07.267913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:56.850 [2024-12-06 18:29:07.267931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:56.850 [2024-12-06 18:29:07.267940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:56.850 [2024-12-06 18:29:07.267950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:56.850 [2024-12-06 18:29:07.267960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:56.850 [2024-12-06 18:29:07.267969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:56.850 [2024-12-06 18:29:07.267978] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:56.850 [2024-12-06 18:29:07.267989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:56.850 [2024-12-06 18:29:07.267998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:56.850 [2024-12-06 18:29:07.268008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:56.850 [2024-12-06 18:29:07.268017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:56.850 [2024-12-06 18:29:07.268026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:56.850 [2024-12-06 18:29:07.268035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:56.850 [2024-12-06 18:29:07.268044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:56.850 [2024-12-06 18:29:07.268053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:56.850 [2024-12-06 18:29:07.268062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:56.850 [2024-12-06 18:29:07.268071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:56.850 [2024-12-06 18:29:07.268080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:56.850 [2024-12-06 18:29:07.268092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:56.850 [2024-12-06 18:29:07.268101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:56.850 [2024-12-06 18:29:07.268110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:56.850 [2024-12-06 18:29:07.268119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:56.851 [2024-12-06 18:29:07.268127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:56.851 [2024-12-06 18:29:07.268137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:56.851 [2024-12-06 18:29:07.268146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:56.851 [2024-12-06 18:29:07.268156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:56.851 [2024-12-06 18:29:07.268165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:56.851 [2024-12-06 18:29:07.268174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:56.851 [2024-12-06 18:29:07.268183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:56.851 [2024-12-06 18:29:07.268192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:56.851 [2024-12-06 18:29:07.268201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:56.851 [2024-12-06 18:29:07.268211] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:56.851 [2024-12-06 18:29:07.268221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:56.851 [2024-12-06 18:29:07.268231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:56.851 [2024-12-06 18:29:07.268240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:56.851 [2024-12-06 18:29:07.268250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:56.851 [2024-12-06 18:29:07.268260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:56.851 [2024-12-06 18:29:07.268281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:56.851 
[2024-12-06 18:29:07.268291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:56.851 [2024-12-06 18:29:07.268300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:56.851 [2024-12-06 18:29:07.268309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:56.851 [2024-12-06 18:29:07.268319] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:56.851 [2024-12-06 18:29:07.268331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:56.851 [2024-12-06 18:29:07.268348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:56.851 [2024-12-06 18:29:07.268358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:56.851 [2024-12-06 18:29:07.268368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:56.851 [2024-12-06 18:29:07.268379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:56.851 [2024-12-06 18:29:07.268389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:56.851 [2024-12-06 18:29:07.268400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:56.851 [2024-12-06 18:29:07.268410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:56.851 [2024-12-06 18:29:07.268420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:56.851 [2024-12-06 18:29:07.268431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:56.851 [2024-12-06 18:29:07.268441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:56.851 [2024-12-06 18:29:07.268451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:56.851 [2024-12-06 18:29:07.268461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:56.851 [2024-12-06 18:29:07.268471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:56.851 [2024-12-06 18:29:07.268482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:56.851 [2024-12-06 18:29:07.268491] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:56.851 [2024-12-06 18:29:07.268502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:56.851 [2024-12-06 18:29:07.268513] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:56.851 [2024-12-06 18:29:07.268523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:56.851 [2024-12-06 18:29:07.268533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:56.851 [2024-12-06 18:29:07.268543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:56.851 [2024-12-06 18:29:07.268554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.851 [2024-12-06 18:29:07.268563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:56.851 [2024-12-06 18:29:07.268573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:30:56.851 [2024-12-06 18:29:07.268583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.851 [2024-12-06 18:29:07.308167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.851 [2024-12-06 18:29:07.308214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:56.851 [2024-12-06 18:29:07.308229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.600 ms 00:30:56.851 [2024-12-06 18:29:07.308244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.851 [2024-12-06 18:29:07.308348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.851 [2024-12-06 18:29:07.308360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:56.851 [2024-12-06 18:29:07.308371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:56.851 [2024-12-06 18:29:07.308381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.851 [2024-12-06 18:29:07.360694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.851 [2024-12-06 18:29:07.360741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:56.851 [2024-12-06 18:29:07.360757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.321 ms 00:30:56.851 [2024-12-06 18:29:07.360768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.851 [2024-12-06 18:29:07.360819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.851 [2024-12-06 18:29:07.360831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:56.851 [2024-12-06 18:29:07.360846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:56.851 [2024-12-06 18:29:07.360856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.851 [2024-12-06 18:29:07.361362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.851 [2024-12-06 18:29:07.361378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:56.851 [2024-12-06 18:29:07.361389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:30:56.851 [2024-12-06 18:29:07.361399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.851 [2024-12-06 18:29:07.361517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.851 [2024-12-06 18:29:07.361531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:56.851 [2024-12-06 18:29:07.361548] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:30:56.851 [2024-12-06 18:29:07.361558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.851 [2024-12-06 18:29:07.381021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.851 [2024-12-06 18:29:07.381066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:56.851 [2024-12-06 18:29:07.381081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.472 ms 00:30:56.851 [2024-12-06 18:29:07.381092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.851 [2024-12-06 18:29:07.400001] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:30:56.851 [2024-12-06 18:29:07.400165] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:56.851 [2024-12-06 18:29:07.400186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.851 [2024-12-06 18:29:07.400197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:56.851 [2024-12-06 18:29:07.400210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.003 ms 00:30:56.851 [2024-12-06 18:29:07.400220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.133 [2024-12-06 18:29:07.429883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.133 [2024-12-06 18:29:07.429928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:57.133 [2024-12-06 18:29:07.429943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.582 ms 00:30:57.133 [2024-12-06 18:29:07.429953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.133 [2024-12-06 18:29:07.448565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.133 [2024-12-06 18:29:07.448608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:57.133 [2024-12-06 18:29:07.448622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.596 ms 00:30:57.133 [2024-12-06 18:29:07.448632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.133 [2024-12-06 18:29:07.466408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.133 [2024-12-06 18:29:07.466443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:57.133 [2024-12-06 18:29:07.466456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.762 ms 00:30:57.133 [2024-12-06 18:29:07.466466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.133 [2024-12-06 18:29:07.467294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.133 [2024-12-06 18:29:07.467315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:57.133 [2024-12-06 18:29:07.467330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:30:57.133 [2024-12-06 18:29:07.467340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.133 [2024-12-06 18:29:07.564038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.133 [2024-12-06 18:29:07.564096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:57.133 [2024-12-06 18:29:07.564120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 96.832 ms 00:30:57.133 [2024-12-06 18:29:07.564131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.133 [2024-12-06 18:29:07.575708] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:57.133 [2024-12-06 18:29:07.578892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.133 [2024-12-06 18:29:07.578933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:57.133 [2024-12-06 18:29:07.578948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.722 ms 00:30:57.133 [2024-12-06 18:29:07.578959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.133 [2024-12-06 18:29:07.579059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.133 [2024-12-06 18:29:07.579073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:57.133 [2024-12-06 18:29:07.579088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:57.133 [2024-12-06 18:29:07.579099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.133 [2024-12-06 18:29:07.579978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.133 [2024-12-06 18:29:07.580002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:57.133 [2024-12-06 18:29:07.580014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:30:57.133 [2024-12-06 18:29:07.580024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.133 [2024-12-06 18:29:07.580051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.133 [2024-12-06 18:29:07.580062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:57.133 [2024-12-06 18:29:07.580072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:57.133 [2024-12-06 18:29:07.580083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.133 [2024-12-06 18:29:07.580120] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:57.133 [2024-12-06 18:29:07.580133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.133 [2024-12-06 18:29:07.580144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:57.133 [2024-12-06 18:29:07.580154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:57.133 [2024-12-06 18:29:07.580164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.133 [2024-12-06 18:29:07.616653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.133 [2024-12-06 18:29:07.616715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:57.133 [2024-12-06 18:29:07.616737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.527 ms 00:30:57.133 [2024-12-06 18:29:07.616748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.133 [2024-12-06 18:29:07.616826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.133 [2024-12-06 18:29:07.616839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:57.133 [2024-12-06 18:29:07.616850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:57.133 [2024-12-06 18:29:07.616860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:30:57.133 [2024-12-06 18:29:07.619461] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 386.207 ms, result 0 00:30:58.513  [2024-12-06T18:29:10.027Z] Copying: 2436/1048576 [kB] (2436 kBps) [2024-12-06T18:29:10.961Z] Copying: 6148/1048576 [kB] (3712 kBps) [2024-12-06T18:29:11.893Z] Copying: 15596/1048576 [kB] (9448 kBps) [2024-12-06T18:29:13.268Z] Copying: 51/1024 [MB] (36 MBps) [2024-12-06T18:29:13.835Z] Copying: 87/1024 [MB] (35 MBps) [2024-12-06T18:29:15.223Z] Copying: 122/1024 [MB] (34 MBps) [2024-12-06T18:29:15.848Z] Copying: 157/1024 [MB] (35 MBps) [2024-12-06T18:29:17.227Z] Copying: 192/1024 [MB] (35 MBps) [2024-12-06T18:29:18.166Z] Copying: 229/1024 [MB] (36 MBps) [2024-12-06T18:29:19.104Z] Copying: 265/1024 [MB] (36 MBps) [2024-12-06T18:29:20.042Z] Copying: 300/1024 [MB] (34 MBps) [2024-12-06T18:29:20.978Z] Copying: 336/1024 [MB] (35 MBps) [2024-12-06T18:29:21.913Z] Copying: 371/1024 [MB] (35 MBps) [2024-12-06T18:29:22.847Z] Copying: 406/1024 [MB] (35 MBps) [2024-12-06T18:29:23.852Z] Copying: 440/1024 [MB] (34 MBps) [2024-12-06T18:29:25.230Z] Copying: 473/1024 [MB] (32 MBps) [2024-12-06T18:29:26.161Z] Copying: 507/1024 [MB] (33 MBps) [2024-12-06T18:29:27.094Z] Copying: 540/1024 [MB] (33 MBps) [2024-12-06T18:29:28.036Z] Copying: 574/1024 [MB] (33 MBps) [2024-12-06T18:29:28.971Z] Copying: 608/1024 [MB] (33 MBps) [2024-12-06T18:29:29.907Z] Copying: 642/1024 [MB] (34 MBps) [2024-12-06T18:29:30.844Z] Copying: 678/1024 [MB] (36 MBps) [2024-12-06T18:29:32.215Z] Copying: 714/1024 [MB] (35 MBps) [2024-12-06T18:29:33.151Z] Copying: 749/1024 [MB] (34 MBps) [2024-12-06T18:29:34.086Z] Copying: 785/1024 [MB] (35 MBps) [2024-12-06T18:29:35.026Z] Copying: 820/1024 [MB] (35 MBps) [2024-12-06T18:29:35.971Z] Copying: 855/1024 [MB] (34 MBps) [2024-12-06T18:29:36.906Z] Copying: 889/1024 [MB] (34 MBps) [2024-12-06T18:29:37.841Z] Copying: 923/1024 [MB] (34 MBps) [2024-12-06T18:29:39.218Z] Copying: 957/1024 [MB] (33 MBps) [2024-12-06T18:29:39.788Z] Copying: 991/1024 [MB] (33 MBps) [2024-12-06T18:29:39.788Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-12-06 18:29:39.769395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.212 [2024-12-06 18:29:39.769444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:29.212 [2024-12-06 18:29:39.769462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:29.212 [2024-12-06 18:29:39.769473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.212 [2024-12-06 18:29:39.769498] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:29.212 [2024-12-06 18:29:39.773961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.212 [2024-12-06 18:29:39.774000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:29.212 [2024-12-06 18:29:39.774014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.449 ms 00:31:29.212 [2024-12-06 18:29:39.774024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.212 [2024-12-06 18:29:39.774227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.212 [2024-12-06 18:29:39.774247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:29.212 [2024-12-06 18:29:39.774258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:31:29.212 [2024-12-06 
18:29:39.774278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.472 [2024-12-06 18:29:39.792638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.473 [2024-12-06 18:29:39.792683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:29.473 [2024-12-06 18:29:39.792699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.371 ms 00:31:29.473 [2024-12-06 18:29:39.792709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.473 [2024-12-06 18:29:39.797697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.473 [2024-12-06 18:29:39.797733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:29.473 [2024-12-06 18:29:39.797752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.966 ms 00:31:29.473 [2024-12-06 18:29:39.797762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.473 [2024-12-06 18:29:39.833908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.473 [2024-12-06 18:29:39.833953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:29.473 [2024-12-06 18:29:39.833967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.128 ms 00:31:29.473 [2024-12-06 18:29:39.833977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.473 [2024-12-06 18:29:39.854319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.473 [2024-12-06 18:29:39.854383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:29.473 [2024-12-06 18:29:39.854398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.336 ms 00:31:29.473 [2024-12-06 18:29:39.854409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.473 [2024-12-06 18:29:39.856376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.473 [2024-12-06 18:29:39.856412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:29.473 [2024-12-06 18:29:39.856425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.928 ms 00:31:29.473 [2024-12-06 18:29:39.856441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.473 [2024-12-06 18:29:39.892417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.473 [2024-12-06 18:29:39.892456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:29.473 [2024-12-06 18:29:39.892469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.018 ms 00:31:29.473 [2024-12-06 18:29:39.892480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.473 [2024-12-06 18:29:39.928495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.473 [2024-12-06 18:29:39.928533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:29.473 [2024-12-06 18:29:39.928546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.037 ms 00:31:29.473 [2024-12-06 18:29:39.928555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.473 [2024-12-06 18:29:39.964514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.473 [2024-12-06 18:29:39.964563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:29.473 [2024-12-06 18:29:39.964577] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 35.980 ms 00:31:29.473 [2024-12-06 18:29:39.964587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.473 [2024-12-06 18:29:39.999595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.473 [2024-12-06 18:29:39.999633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:29.473 [2024-12-06 18:29:39.999661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.990 ms 00:31:29.473 [2024-12-06 18:29:39.999671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.473 [2024-12-06 18:29:39.999707] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:29.473 [2024-12-06 18:29:39.999722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:29.473 [2024-12-06 18:29:39.999735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:31:29.473 [2024-12-06 18:29:39.999746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 
261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:39.999993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:29.473 [2024-12-06 18:29:40.000192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000456] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 
18:29:40.000724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:29.474 [2024-12-06 18:29:40.000794] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:29.474 [2024-12-06 18:29:40.000804] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0fd165a4-a7ae-4b12-8280-ae23d7c38836 00:31:29.474 [2024-12-06 18:29:40.000815] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:31:29.474 [2024-12-06 18:29:40.000824] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 263616 00:31:29.474 [2024-12-06 18:29:40.000838] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 261632 00:31:29.474 [2024-12-06 18:29:40.000848] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:31:29.474 [2024-12-06 18:29:40.000858] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:29.474 [2024-12-06 18:29:40.000878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:29.474 [2024-12-06 18:29:40.000888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:29.474 [2024-12-06 18:29:40.000896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:29.474 [2024-12-06 18:29:40.000905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:29.474 [2024-12-06 18:29:40.000915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.474 [2024-12-06 18:29:40.000926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:29.474 [2024-12-06 18:29:40.000936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.211 ms 00:31:29.474 [2024-12-06 18:29:40.000946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.474 [2024-12-06 18:29:40.020501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.474 [2024-12-06 18:29:40.020535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:29.474 [2024-12-06 18:29:40.020548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.552 ms 00:31:29.474 [2024-12-06 18:29:40.020558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.474 [2024-12-06 18:29:40.021090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.474 [2024-12-06 18:29:40.021106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:29.474 [2024-12-06 18:29:40.021118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:31:29.474 [2024-12-06 18:29:40.021127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.734 [2024-12-06 18:29:40.072225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:31:29.734 [2024-12-06 18:29:40.072262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:29.734 [2024-12-06 18:29:40.072282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.734 [2024-12-06 18:29:40.072293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.734 [2024-12-06 18:29:40.072358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.734 [2024-12-06 18:29:40.072369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:29.734 [2024-12-06 18:29:40.072379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.734 [2024-12-06 18:29:40.072389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.734 [2024-12-06 18:29:40.072456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.734 [2024-12-06 18:29:40.072469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:29.734 [2024-12-06 18:29:40.072479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.734 [2024-12-06 18:29:40.072489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.734 [2024-12-06 18:29:40.072506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.734 [2024-12-06 18:29:40.072516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:29.734 [2024-12-06 18:29:40.072526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.734 [2024-12-06 18:29:40.072535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.734 [2024-12-06 18:29:40.195100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.734 [2024-12-06 18:29:40.195174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:29.734 [2024-12-06 18:29:40.195190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.734 [2024-12-06 18:29:40.195200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.734 [2024-12-06 18:29:40.296807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.734 [2024-12-06 18:29:40.296857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:29.734 [2024-12-06 18:29:40.296872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.734 [2024-12-06 18:29:40.296882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.734 [2024-12-06 18:29:40.296995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.734 [2024-12-06 18:29:40.297011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:29.734 [2024-12-06 18:29:40.297021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.734 [2024-12-06 18:29:40.297031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.734 [2024-12-06 18:29:40.297076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.734 [2024-12-06 18:29:40.297087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:29.734 [2024-12-06 18:29:40.297097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.734 [2024-12-06 18:29:40.297107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.734 
[2024-12-06 18:29:40.297211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.734 [2024-12-06 18:29:40.297224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:29.734 [2024-12-06 18:29:40.297239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.734 [2024-12-06 18:29:40.297249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.734 [2024-12-06 18:29:40.297311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.734 [2024-12-06 18:29:40.297325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:29.734 [2024-12-06 18:29:40.297337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.734 [2024-12-06 18:29:40.297346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.734 [2024-12-06 18:29:40.297384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.734 [2024-12-06 18:29:40.297395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:29.734 [2024-12-06 18:29:40.297409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.734 [2024-12-06 18:29:40.297419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.734 [2024-12-06 18:29:40.297462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.734 [2024-12-06 18:29:40.297474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:29.734 [2024-12-06 18:29:40.297484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.734 [2024-12-06 18:29:40.297493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.734 [2024-12-06 18:29:40.297610] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 529.049 ms, result 0 00:31:31.115 00:31:31.115 00:31:31.115 18:29:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:33.022 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:33.022 18:29:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:33.022 [2024-12-06 18:29:43.148192] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:31:33.022 [2024-12-06 18:29:43.148343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82699 ] 00:31:33.022 [2024-12-06 18:29:43.329220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.022 [2024-12-06 18:29:43.443577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.281 [2024-12-06 18:29:43.804738] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:33.281 [2024-12-06 18:29:43.804815] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:33.542 [2024-12-06 18:29:43.966031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.542 [2024-12-06 18:29:43.966101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:33.542 [2024-12-06 18:29:43.966118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:33.542 [2024-12-06 18:29:43.966129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.542 [2024-12-06 18:29:43.966181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.542 [2024-12-06 18:29:43.966196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:33.542 [2024-12-06 18:29:43.966207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:31:33.542 [2024-12-06 18:29:43.966217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.542 [2024-12-06 18:29:43.966239] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:33.542 [2024-12-06 18:29:43.967201] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:33.542 [2024-12-06 18:29:43.967231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.542 [2024-12-06 18:29:43.967242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:33.542 [2024-12-06 18:29:43.967253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:31:33.542 [2024-12-06 18:29:43.967280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.542 [2024-12-06 18:29:43.968689] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:33.542 [2024-12-06 18:29:43.987030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.542 [2024-12-06 18:29:43.987070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:33.542 [2024-12-06 18:29:43.987085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.371 ms 00:31:33.542 [2024-12-06 18:29:43.987095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.542 [2024-12-06 18:29:43.987162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.542 [2024-12-06 18:29:43.987175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:33.542 [2024-12-06 18:29:43.987186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:31:33.542 [2024-12-06 18:29:43.987196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.542 [2024-12-06 18:29:43.993760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:33.542 [2024-12-06 18:29:43.993790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:33.542 [2024-12-06 18:29:43.993802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.486 ms 00:31:33.542 [2024-12-06 18:29:43.993831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.542 [2024-12-06 18:29:43.993907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.542 [2024-12-06 18:29:43.993921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:33.542 [2024-12-06 18:29:43.993931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:33.542 [2024-12-06 18:29:43.993941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.542 [2024-12-06 18:29:43.993980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.542 [2024-12-06 18:29:43.993992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:33.542 [2024-12-06 18:29:43.994003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:33.542 [2024-12-06 18:29:43.994014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.542 [2024-12-06 18:29:43.994041] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:33.542 [2024-12-06 18:29:43.998988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.542 [2024-12-06 18:29:43.999023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:33.542 [2024-12-06 18:29:43.999038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.960 ms 00:31:33.542 [2024-12-06 18:29:43.999048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.542 [2024-12-06 18:29:43.999081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.542 [2024-12-06 18:29:43.999093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:33.542 [2024-12-06 18:29:43.999103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:33.542 [2024-12-06 18:29:43.999113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.542 [2024-12-06 18:29:43.999166] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:33.542 [2024-12-06 18:29:43.999191] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:33.542 [2024-12-06 18:29:43.999225] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:33.542 [2024-12-06 18:29:43.999246] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:33.542 [2024-12-06 18:29:43.999345] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:33.542 [2024-12-06 18:29:43.999359] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:33.542 [2024-12-06 18:29:43.999372] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:33.542 [2024-12-06 18:29:43.999384] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:33.542 [2024-12-06 18:29:43.999396] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:33.542 [2024-12-06 18:29:43.999407] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:33.542 [2024-12-06 18:29:43.999417] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:33.542 [2024-12-06 18:29:43.999430] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:33.542 [2024-12-06 18:29:43.999439] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:33.542 [2024-12-06 18:29:43.999450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.542 [2024-12-06 18:29:43.999460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:33.542 [2024-12-06 18:29:43.999470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:31:33.542 [2024-12-06 18:29:43.999480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.542 [2024-12-06 18:29:43.999551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.542 [2024-12-06 18:29:43.999562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:33.542 [2024-12-06 18:29:43.999571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:33.542 [2024-12-06 18:29:43.999581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.542 [2024-12-06 18:29:43.999676] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:33.542 [2024-12-06 18:29:43.999690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:33.542 [2024-12-06 18:29:43.999701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:33.542 [2024-12-06 18:29:43.999711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:33.542 [2024-12-06 18:29:43.999722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:33.542 [2024-12-06 18:29:43.999731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:33.542 [2024-12-06 18:29:43.999741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:33.542 [2024-12-06 18:29:43.999751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:33.542 [2024-12-06 18:29:43.999761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:33.542 [2024-12-06 18:29:43.999770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:33.542 [2024-12-06 18:29:43.999780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:33.542 [2024-12-06 18:29:43.999789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:33.542 [2024-12-06 18:29:43.999799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:33.542 [2024-12-06 18:29:43.999817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:33.542 [2024-12-06 18:29:43.999828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:33.543 [2024-12-06 18:29:43.999837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:33.543 [2024-12-06 18:29:43.999846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:33.543 [2024-12-06 18:29:43.999856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:33.543 [2024-12-06 18:29:43.999865] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:33.543 [2024-12-06 18:29:43.999874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:33.543 [2024-12-06 18:29:43.999884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:33.543 [2024-12-06 18:29:43.999893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:33.543 [2024-12-06 18:29:43.999903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:33.543 [2024-12-06 18:29:43.999912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:33.543 [2024-12-06 18:29:43.999921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:33.543 [2024-12-06 18:29:43.999930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:33.543 [2024-12-06 18:29:43.999940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:33.543 [2024-12-06 18:29:43.999949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:33.543 [2024-12-06 18:29:43.999958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:33.543 [2024-12-06 18:29:43.999967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:33.543 [2024-12-06 18:29:43.999976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:33.543 [2024-12-06 18:29:43.999985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:33.543 [2024-12-06 18:29:43.999994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:33.543 [2024-12-06 18:29:44.000003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:33.543 [2024-12-06 18:29:44.000012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:33.543 [2024-12-06 18:29:44.000022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:33.543 [2024-12-06 18:29:44.000030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:33.543 [2024-12-06 18:29:44.000039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:33.543 [2024-12-06 18:29:44.000048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:33.543 [2024-12-06 18:29:44.000058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:33.543 [2024-12-06 18:29:44.000067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:33.543 [2024-12-06 18:29:44.000076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:33.543 [2024-12-06 18:29:44.000085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:33.543 [2024-12-06 18:29:44.000094] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:33.543 [2024-12-06 18:29:44.000104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:33.543 [2024-12-06 18:29:44.000113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:33.543 [2024-12-06 18:29:44.000123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:33.543 [2024-12-06 18:29:44.000133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:33.543 [2024-12-06 18:29:44.000142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:33.543 [2024-12-06 18:29:44.000152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:33.543 
[2024-12-06 18:29:44.000161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:33.543 [2024-12-06 18:29:44.000170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:33.543 [2024-12-06 18:29:44.000180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:33.543 [2024-12-06 18:29:44.000190] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:33.543 [2024-12-06 18:29:44.000202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:33.543 [2024-12-06 18:29:44.000217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:33.543 [2024-12-06 18:29:44.000228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:33.543 [2024-12-06 18:29:44.000239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:33.543 [2024-12-06 18:29:44.000250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:33.543 [2024-12-06 18:29:44.000260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:33.543 [2024-12-06 18:29:44.000280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:33.543 [2024-12-06 18:29:44.000290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:33.543 [2024-12-06 18:29:44.000301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:33.543 [2024-12-06 18:29:44.000311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:33.543 [2024-12-06 18:29:44.000321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:33.543 [2024-12-06 18:29:44.000331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:33.543 [2024-12-06 18:29:44.000342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:33.543 [2024-12-06 18:29:44.000352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:33.543 [2024-12-06 18:29:44.000363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:33.543 [2024-12-06 18:29:44.000373] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:33.543 [2024-12-06 18:29:44.000384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:33.543 [2024-12-06 18:29:44.000395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:33.543 [2024-12-06 18:29:44.000405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:33.543 [2024-12-06 18:29:44.000416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:33.543 [2024-12-06 18:29:44.000426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:33.543 [2024-12-06 18:29:44.000436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.543 [2024-12-06 18:29:44.000446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:33.543 [2024-12-06 18:29:44.000456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:31:33.543 [2024-12-06 18:29:44.000466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.543 [2024-12-06 18:29:44.039020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.543 [2024-12-06 18:29:44.039061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:33.543 [2024-12-06 18:29:44.039075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.569 ms 00:31:33.543 [2024-12-06 18:29:44.039091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.543 [2024-12-06 18:29:44.039167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.543 [2024-12-06 18:29:44.039179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:33.543 [2024-12-06 18:29:44.039189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:31:33.543 [2024-12-06 18:29:44.039200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.543 [2024-12-06 18:29:44.098353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.543 [2024-12-06 18:29:44.098398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:33.543 [2024-12-06 18:29:44.098413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.193 ms 00:31:33.543 [2024-12-06 18:29:44.098423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.543 [2024-12-06 18:29:44.098457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.543 [2024-12-06 18:29:44.098469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:33.543 [2024-12-06 18:29:44.098484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:33.543 [2024-12-06 18:29:44.098494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.543 [2024-12-06 18:29:44.098958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.543 [2024-12-06 18:29:44.098979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:33.543 [2024-12-06 18:29:44.098990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:31:33.543 [2024-12-06 18:29:44.099000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.543 [2024-12-06 18:29:44.099116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.543 [2024-12-06 18:29:44.099129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:33.543 [2024-12-06 18:29:44.099146] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:31:33.543 [2024-12-06 18:29:44.099156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.820 [2024-12-06 18:29:44.118363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.820 [2024-12-06 18:29:44.118409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:33.820 [2024-12-06 18:29:44.118423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.217 ms 00:31:33.820 [2024-12-06 18:29:44.118433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.820 [2024-12-06 18:29:44.137523] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:33.820 [2024-12-06 18:29:44.137562] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:33.820 [2024-12-06 18:29:44.137576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.820 [2024-12-06 18:29:44.137587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:33.820 [2024-12-06 18:29:44.137614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.075 ms 00:31:33.820 [2024-12-06 18:29:44.137625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.820 [2024-12-06 18:29:44.167019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.820 [2024-12-06 18:29:44.167060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:33.820 [2024-12-06 18:29:44.167090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.400 ms 00:31:33.820 [2024-12-06 18:29:44.167100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.820 [2024-12-06 18:29:44.185383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.820 [2024-12-06 18:29:44.185420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:33.820 [2024-12-06 18:29:44.185433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.258 ms 00:31:33.820 [2024-12-06 18:29:44.185442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.820 [2024-12-06 18:29:44.203175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.820 [2024-12-06 18:29:44.203212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:33.820 [2024-12-06 18:29:44.203225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.709 ms 00:31:33.820 [2024-12-06 18:29:44.203235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.820 [2024-12-06 18:29:44.203971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.820 [2024-12-06 18:29:44.203998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:33.820 [2024-12-06 18:29:44.204013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:31:33.820 [2024-12-06 18:29:44.204023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.820 [2024-12-06 18:29:44.289457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.820 [2024-12-06 18:29:44.289520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:33.820 [2024-12-06 18:29:44.289542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 85.550 ms 00:31:33.820 [2024-12-06 18:29:44.289569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.820 [2024-12-06 18:29:44.300492] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:33.820 [2024-12-06 18:29:44.302965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.820 [2024-12-06 18:29:44.303000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:33.820 [2024-12-06 18:29:44.303013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.338 ms 00:31:33.820 [2024-12-06 18:29:44.303024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.820 [2024-12-06 18:29:44.303118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.820 [2024-12-06 18:29:44.303131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:33.821 [2024-12-06 18:29:44.303146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:33.821 [2024-12-06 18:29:44.303156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.821 [2024-12-06 18:29:44.304003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.821 [2024-12-06 18:29:44.304025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:33.821 [2024-12-06 18:29:44.304036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:31:33.821 [2024-12-06 18:29:44.304046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.821 [2024-12-06 18:29:44.304092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.821 [2024-12-06 18:29:44.304104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:33.821 [2024-12-06 18:29:44.304113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:33.821 [2024-12-06 18:29:44.304124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.821 [2024-12-06 18:29:44.304161] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:33.821 [2024-12-06 18:29:44.304174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.821 [2024-12-06 18:29:44.304184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:33.821 [2024-12-06 18:29:44.304195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:33.821 [2024-12-06 18:29:44.304205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.821 [2024-12-06 18:29:44.340782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.821 [2024-12-06 18:29:44.340830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:33.821 [2024-12-06 18:29:44.340850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.616 ms 00:31:33.821 [2024-12-06 18:29:44.340861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.821 [2024-12-06 18:29:44.340937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.821 [2024-12-06 18:29:44.340950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:33.821 [2024-12-06 18:29:44.340961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:33.821 [2024-12-06 18:29:44.340971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:33.821 [2024-12-06 18:29:44.342180] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 376.333 ms, result 0 00:31:35.217  [2024-12-06T18:30:22.026Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-12-06 18:30:22.016777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.450 [2024-12-06 18:30:22.017184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:11.450 [2024-12-06 18:30:22.017908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:11.450 [2024-12-06 18:30:22.017948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.450 [2024-12-06 18:30:22.018024] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:11.709 [2024-12-06 18:30:22.027233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.709 [2024-12-06 18:30:22.027313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:11.709 [2024-12-06 18:30:22.027336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.185 ms 00:32:11.709 [2024-12-06 18:30:22.027355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.709 [2024-12-06
18:30:22.027741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.709 [2024-12-06 18:30:22.027771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:11.709 [2024-12-06 18:30:22.027791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:32:11.709 [2024-12-06 18:30:22.027809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.709 [2024-12-06 18:30:22.033212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.709 [2024-12-06 18:30:22.033244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:11.709 [2024-12-06 18:30:22.033257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.372 ms 00:32:11.709 [2024-12-06 18:30:22.033285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.709 [2024-12-06 18:30:22.040144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.709 [2024-12-06 18:30:22.040185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:11.709 [2024-12-06 18:30:22.040206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.838 ms 00:32:11.709 [2024-12-06 18:30:22.040219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.709 [2024-12-06 18:30:22.078364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.709 [2024-12-06 18:30:22.078428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:11.709 [2024-12-06 18:30:22.078443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.123 ms 00:32:11.709 [2024-12-06 18:30:22.078454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.709 [2024-12-06 18:30:22.101464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.709 [2024-12-06 18:30:22.101512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:11.709 [2024-12-06 18:30:22.101543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.006 ms 00:32:11.710 [2024-12-06 18:30:22.101554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.710 [2024-12-06 18:30:22.103954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.710 [2024-12-06 18:30:22.103991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:11.710 [2024-12-06 18:30:22.104004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.355 ms 00:32:11.710 [2024-12-06 18:30:22.104015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.710 [2024-12-06 18:30:22.142035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.710 [2024-12-06 18:30:22.142079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:11.710 [2024-12-06 18:30:22.142093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.065 ms 00:32:11.710 [2024-12-06 18:30:22.142103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.710 [2024-12-06 18:30:22.179711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.710 [2024-12-06 18:30:22.179750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:11.710 [2024-12-06 18:30:22.179764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.630 ms 00:32:11.710 [2024-12-06 18:30:22.179774] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.710 [2024-12-06 18:30:22.214396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.710 [2024-12-06 18:30:22.214452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:11.710 [2024-12-06 18:30:22.214472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.640 ms 00:32:11.710 [2024-12-06 18:30:22.214487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.710 [2024-12-06 18:30:22.250120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.710 [2024-12-06 18:30:22.250160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:11.710 [2024-12-06 18:30:22.250174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.594 ms 00:32:11.710 [2024-12-06 18:30:22.250184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.710 [2024-12-06 18:30:22.250221] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:11.710 [2024-12-06 18:30:22.250243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:11.710 [2024-12-06 18:30:22.250260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:32:11.710 [2024-12-06 18:30:22.250285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3 ... Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:11.711 [2024-12-06 18:30:22.251345] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:11.711 [2024-12-06 18:30:22.251355] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0fd165a4-a7ae-4b12-8280-ae23d7c38836 00:32:11.711 [2024-12-06 18:30:22.251366] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:32:11.711 [2024-12-06 18:30:22.251375] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:11.711 [2024-12-06 18:30:22.251387] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:11.711 [2024-12-06 18:30:22.251397] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:11.711 [2024-12-06 18:30:22.251418] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:11.711 [2024-12-06 18:30:22.251428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:11.711 [2024-12-06 18:30:22.251438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:11.711 [2024-12-06 18:30:22.251447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:11.711 [2024-12-06 18:30:22.251456] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:11.711 [2024-12-06 18:30:22.251465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.711 [2024-12-06 18:30:22.251476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:11.711 [2024-12-06 18:30:22.251486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.247 ms 00:32:11.711 [2024-12-06 18:30:22.251500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.711 [2024-12-06 18:30:22.270993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.711 [2024-12-06 18:30:22.271028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:11.711 [2024-12-06 18:30:22.271056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.490 ms 00:32:11.711 [2024-12-06 18:30:22.271067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.711 [2024-12-06 18:30:22.271608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.711 [2024-12-06 18:30:22.271626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L
checkpointing 00:32:11.711 [2024-12-06 18:30:22.271637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:32:11.711 [2024-12-06 18:30:22.271646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.970 [2024-12-06 18:30:22.322965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.970 [2024-12-06 18:30:22.323002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:11.970 [2024-12-06 18:30:22.323014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.970 [2024-12-06 18:30:22.323026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.970 [2024-12-06 18:30:22.323079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.970 [2024-12-06 18:30:22.323094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:11.970 [2024-12-06 18:30:22.323104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.970 [2024-12-06 18:30:22.323114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.970 [2024-12-06 18:30:22.323173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.970 [2024-12-06 18:30:22.323187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:11.970 [2024-12-06 18:30:22.323197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.970 [2024-12-06 18:30:22.323207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.970 [2024-12-06 18:30:22.323223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.970 [2024-12-06 18:30:22.323234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:11.970 [2024-12-06 18:30:22.323249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.970 [2024-12-06 18:30:22.323259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.970 [2024-12-06 18:30:22.446759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.970 [2024-12-06 18:30:22.446836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:11.970 [2024-12-06 18:30:22.446850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.970 [2024-12-06 18:30:22.446862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.230 [2024-12-06 18:30:22.549685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.230 [2024-12-06 18:30:22.549741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:12.230 [2024-12-06 18:30:22.549755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.230 [2024-12-06 18:30:22.549766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.230 [2024-12-06 18:30:22.549865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.230 [2024-12-06 18:30:22.549877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:12.230 [2024-12-06 18:30:22.549888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.230 [2024-12-06 18:30:22.549898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.230 [2024-12-06 18:30:22.549946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.230 [2024-12-06 
18:30:22.549957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:12.230 [2024-12-06 18:30:22.549967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.230 [2024-12-06 18:30:22.549980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.230 [2024-12-06 18:30:22.550095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.230 [2024-12-06 18:30:22.550109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:12.230 [2024-12-06 18:30:22.550120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.230 [2024-12-06 18:30:22.550130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.230 [2024-12-06 18:30:22.550163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.230 [2024-12-06 18:30:22.550175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:12.230 [2024-12-06 18:30:22.550186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.230 [2024-12-06 18:30:22.550195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.230 [2024-12-06 18:30:22.550236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.230 [2024-12-06 18:30:22.550246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:12.230 [2024-12-06 18:30:22.550256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.230 [2024-12-06 18:30:22.550277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.230 [2024-12-06 18:30:22.550319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.230 [2024-12-06 18:30:22.550331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:12.230 [2024-12-06 18:30:22.550342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.230 [2024-12-06 18:30:22.550356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.230 [2024-12-06 18:30:22.550478] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 534.557 ms, result 0 00:32:13.167 00:32:13.167 00:32:13.167 18:30:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:15.073 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:32:15.073 18:30:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:32:15.073 18:30:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:32:15.073 18:30:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:15.073 18:30:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:15.073 18:30:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:32:15.073 18:30:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:15.073 18:30:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:15.332 18:30:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80976 00:32:15.332 18:30:25 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@954 -- # '[' -z 80976 ']' 00:32:15.332 18:30:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80976 00:32:15.332 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80976) - No such process 00:32:15.332 Process with pid 80976 is not found 00:32:15.332 18:30:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80976 is not found' 00:32:15.332 18:30:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:32:15.591 18:30:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:32:15.591 Remove shared memory files 00:32:15.591 18:30:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:15.591 18:30:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:15.591 18:30:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:15.591 18:30:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:32:15.591 18:30:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:15.591 18:30:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:15.591 00:32:15.591 real 3m26.959s 00:32:15.591 user 3m53.721s 00:32:15.591 sys 0m37.000s 00:32:15.591 18:30:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:15.591 18:30:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:15.591 ************************************ 00:32:15.591 END TEST ftl_dirty_shutdown 00:32:15.591 ************************************ 00:32:15.591 18:30:25 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:15.591 18:30:25 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:15.591 18:30:25 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:15.591 18:30:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:15.591 ************************************ 00:32:15.591 START TEST ftl_upgrade_shutdown 00:32:15.591 ************************************ 00:32:15.591 18:30:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:15.591 * Looking for test storage... 
00:32:15.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:15.591 18:30:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:15.591 18:30:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:32:15.591 18:30:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:15.850 18:30:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:15.850 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:15.850 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:15.850 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:15.850 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:15.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.851 --rc genhtml_branch_coverage=1 00:32:15.851 --rc genhtml_function_coverage=1 00:32:15.851 --rc genhtml_legend=1 00:32:15.851 --rc geninfo_all_blocks=1 00:32:15.851 --rc geninfo_unexecuted_blocks=1 00:32:15.851 00:32:15.851 ' 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:15.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.851 --rc genhtml_branch_coverage=1 00:32:15.851 --rc genhtml_function_coverage=1 00:32:15.851 --rc genhtml_legend=1 00:32:15.851 --rc geninfo_all_blocks=1 00:32:15.851 --rc geninfo_unexecuted_blocks=1 00:32:15.851 00:32:15.851 ' 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:15.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.851 --rc genhtml_branch_coverage=1 00:32:15.851 --rc genhtml_function_coverage=1 00:32:15.851 --rc genhtml_legend=1 00:32:15.851 --rc geninfo_all_blocks=1 00:32:15.851 --rc geninfo_unexecuted_blocks=1 00:32:15.851 00:32:15.851 ' 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:15.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.851 --rc genhtml_branch_coverage=1 00:32:15.851 --rc genhtml_function_coverage=1 00:32:15.851 --rc genhtml_legend=1 00:32:15.851 --rc geninfo_all_blocks=1 00:32:15.851 --rc geninfo_unexecuted_blocks=1 00:32:15.851 00:32:15.851 ' 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:32:15.851 18:30:26 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:32:15.851 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:32:15.852 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:32:15.852 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:15.852 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:15.852 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:15.852 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83204 00:32:15.852 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:15.852 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83204 00:32:15.852 18:30:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83204 ']' 00:32:15.852 18:30:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:32:15.852 18:30:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.852 18:30:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:15.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.852 18:30:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.852 18:30:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:15.852 18:30:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:15.852 [2024-12-06 18:30:26.380824] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:32:15.852 [2024-12-06 18:30:26.381131] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83204 ] 00:32:16.109 [2024-12-06 18:30:26.564701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.109 [2024-12-06 18:30:26.677158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:32:17.041 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:32:17.299 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:32:17.299 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:32:17.299 18:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:32:17.299 18:30:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:32:17.299 18:30:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:17.299 18:30:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:17.299 18:30:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:32:17.556 18:30:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:32:17.556 18:30:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:17.556 { 00:32:17.556 "name": "basen1", 00:32:17.556 "aliases": [ 00:32:17.556 "5ccd0080-7e8e-4631-b2d4-a4af01b3ae7d" 00:32:17.556 ], 00:32:17.556 "product_name": "NVMe disk", 00:32:17.556 "block_size": 4096, 00:32:17.556 "num_blocks": 1310720, 00:32:17.556 "uuid": "5ccd0080-7e8e-4631-b2d4-a4af01b3ae7d", 00:32:17.556 "numa_id": -1, 00:32:17.556 "assigned_rate_limits": { 00:32:17.557 "rw_ios_per_sec": 0, 00:32:17.557 "rw_mbytes_per_sec": 0, 00:32:17.557 "r_mbytes_per_sec": 0, 00:32:17.557 "w_mbytes_per_sec": 0 00:32:17.557 }, 00:32:17.557 "claimed": true, 00:32:17.557 "claim_type": "read_many_write_one", 00:32:17.557 "zoned": false, 00:32:17.557 "supported_io_types": { 00:32:17.557 "read": true, 00:32:17.557 "write": true, 00:32:17.557 "unmap": true, 00:32:17.557 "flush": true, 00:32:17.557 "reset": true, 00:32:17.557 "nvme_admin": true, 00:32:17.557 "nvme_io": true, 00:32:17.557 "nvme_io_md": false, 00:32:17.557 "write_zeroes": true, 00:32:17.557 "zcopy": false, 00:32:17.557 "get_zone_info": false, 00:32:17.557 "zone_management": false, 00:32:17.557 "zone_append": false, 00:32:17.557 "compare": true, 00:32:17.557 "compare_and_write": false, 00:32:17.557 "abort": true, 00:32:17.557 "seek_hole": false, 00:32:17.557 "seek_data": false, 00:32:17.557 "copy": true, 00:32:17.557 "nvme_iov_md": false 00:32:17.557 }, 00:32:17.557 "driver_specific": { 00:32:17.557 "nvme": [ 00:32:17.557 { 00:32:17.557 "pci_address": "0000:00:11.0", 00:32:17.557 "trid": { 00:32:17.557 "trtype": "PCIe", 00:32:17.557 "traddr": "0000:00:11.0" 00:32:17.557 }, 00:32:17.557 "ctrlr_data": { 00:32:17.557 "cntlid": 0, 00:32:17.557 "vendor_id": "0x1b36", 00:32:17.557 "model_number": "QEMU NVMe Ctrl", 00:32:17.557 "serial_number": "12341", 00:32:17.557 "firmware_revision": "8.0.0", 00:32:17.557 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:17.557 "oacs": { 00:32:17.557 "security": 0, 00:32:17.557 "format": 1, 00:32:17.557 "firmware": 0, 00:32:17.557 "ns_manage": 1 00:32:17.557 }, 00:32:17.557 "multi_ctrlr": false, 00:32:17.557 "ana_reporting": false 00:32:17.557 }, 00:32:17.557 "vs": { 00:32:17.557 "nvme_version": "1.4" 00:32:17.557 }, 00:32:17.557 "ns_data": { 00:32:17.557 "id": 1, 00:32:17.557 "can_share": false 00:32:17.557 } 00:32:17.557 } 00:32:17.557 ], 00:32:17.557 "mp_policy": "active_passive" 00:32:17.557 } 00:32:17.557 } 00:32:17.557 ]' 00:32:17.557 18:30:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:17.557 18:30:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:17.557 18:30:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:17.814 18:30:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:17.814 18:30:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:17.814 18:30:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:32:17.814 18:30:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:32:17.814 18:30:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:32:17.814 18:30:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:32:17.814 18:30:28 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:17.814 18:30:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:18.072 18:30:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=15d3342c-e24b-40a1-9820-baeb65fb0e76 00:32:18.072 18:30:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:32:18.072 18:30:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 15d3342c-e24b-40a1-9820-baeb65fb0e76 00:32:18.330 18:30:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:32:18.331 18:30:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=d355aacd-de75-4469-83d1-757ef9c75cb5 00:32:18.331 18:30:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u d355aacd-de75-4469-83d1-757ef9c75cb5 00:32:18.588 18:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=3e1bd575-1e8d-4050-bcc1-a1f75a004643 00:32:18.589 18:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 3e1bd575-1e8d-4050-bcc1-a1f75a004643 ]] 00:32:18.589 18:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 3e1bd575-1e8d-4050-bcc1-a1f75a004643 5120 00:32:18.589 18:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:32:18.589 18:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:18.589 18:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=3e1bd575-1e8d-4050-bcc1-a1f75a004643 00:32:18.589 18:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:32:18.589 18:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3e1bd575-1e8d-4050-bcc1-a1f75a004643 00:32:18.589 18:30:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3e1bd575-1e8d-4050-bcc1-a1f75a004643 00:32:18.589 18:30:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:18.589 18:30:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:18.589 18:30:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:18.589 18:30:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3e1bd575-1e8d-4050-bcc1-a1f75a004643 00:32:18.847 18:30:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:18.847 { 00:32:18.847 "name": "3e1bd575-1e8d-4050-bcc1-a1f75a004643", 00:32:18.847 "aliases": [ 00:32:18.847 "lvs/basen1p0" 00:32:18.847 ], 00:32:18.847 "product_name": "Logical Volume", 00:32:18.847 "block_size": 4096, 00:32:18.847 "num_blocks": 5242880, 00:32:18.847 "uuid": "3e1bd575-1e8d-4050-bcc1-a1f75a004643", 00:32:18.847 "assigned_rate_limits": { 00:32:18.847 "rw_ios_per_sec": 0, 00:32:18.847 "rw_mbytes_per_sec": 0, 00:32:18.847 "r_mbytes_per_sec": 0, 00:32:18.847 "w_mbytes_per_sec": 0 00:32:18.847 }, 00:32:18.847 "claimed": false, 00:32:18.847 "zoned": false, 00:32:18.847 "supported_io_types": { 00:32:18.847 "read": true, 00:32:18.847 "write": true, 00:32:18.847 "unmap": true, 00:32:18.847 "flush": false, 00:32:18.847 "reset": true, 00:32:18.847 "nvme_admin": false, 00:32:18.847 "nvme_io": false, 00:32:18.847 "nvme_io_md": false, 00:32:18.847 "write_zeroes": 
true, 00:32:18.847 "zcopy": false, 00:32:18.847 "get_zone_info": false, 00:32:18.847 "zone_management": false, 00:32:18.847 "zone_append": false, 00:32:18.847 "compare": false, 00:32:18.847 "compare_and_write": false, 00:32:18.847 "abort": false, 00:32:18.847 "seek_hole": true, 00:32:18.847 "seek_data": true, 00:32:18.847 "copy": false, 00:32:18.847 "nvme_iov_md": false 00:32:18.847 }, 00:32:18.847 "driver_specific": { 00:32:18.847 "lvol": { 00:32:18.847 "lvol_store_uuid": "d355aacd-de75-4469-83d1-757ef9c75cb5", 00:32:18.847 "base_bdev": "basen1", 00:32:18.847 "thin_provision": true, 00:32:18.847 "num_allocated_clusters": 0, 00:32:18.847 "snapshot": false, 00:32:18.847 "clone": false, 00:32:18.847 "esnap_clone": false 00:32:18.847 } 00:32:18.847 } 00:32:18.847 } 00:32:18.847 ]' 00:32:18.847 18:30:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:18.847 18:30:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:18.847 18:30:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:18.847 18:30:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:32:18.847 18:30:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:32:18.847 18:30:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:32:18.847 18:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:32:18.847 18:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:32:18.847 18:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:32:19.105 18:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:32:19.105 18:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:32:19.105 18:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:32:19.363 18:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:32:19.363 18:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:32:19.363 18:30:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 3e1bd575-1e8d-4050-bcc1-a1f75a004643 -c cachen1p0 --l2p_dram_limit 2 00:32:19.621 [2024-12-06 18:30:30.078673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.621 [2024-12-06 18:30:30.078871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:19.621 [2024-12-06 18:30:30.078903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:19.621 [2024-12-06 18:30:30.078914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.621 [2024-12-06 18:30:30.079003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.621 [2024-12-06 18:30:30.079015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:19.621 [2024-12-06 18:30:30.079029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:32:19.621 [2024-12-06 18:30:30.079040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.621 [2024-12-06 18:30:30.079064] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:19.621 [2024-12-06 
18:30:30.080178] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:19.621 [2024-12-06 18:30:30.080210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.621 [2024-12-06 18:30:30.080221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:19.621 [2024-12-06 18:30:30.080250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.151 ms 00:32:19.621 [2024-12-06 18:30:30.080261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.621 [2024-12-06 18:30:30.080355] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID aaebc61a-a923-4f65-98d7-8a7f2859da96 00:32:19.621 [2024-12-06 18:30:30.081764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.621 [2024-12-06 18:30:30.081801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:32:19.621 [2024-12-06 18:30:30.081813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:32:19.621 [2024-12-06 18:30:30.081825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.621 [2024-12-06 18:30:30.089172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.621 [2024-12-06 18:30:30.089366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:19.621 [2024-12-06 18:30:30.089388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.316 ms 00:32:19.621 [2024-12-06 18:30:30.089401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.621 [2024-12-06 18:30:30.089455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.621 [2024-12-06 18:30:30.089470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:19.621 [2024-12-06 18:30:30.089481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:32:19.621 [2024-12-06 18:30:30.089508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.621 [2024-12-06 18:30:30.089580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.621 [2024-12-06 18:30:30.089595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:19.621 [2024-12-06 18:30:30.089609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:32:19.621 [2024-12-06 18:30:30.089621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.621 [2024-12-06 18:30:30.089647] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:19.621 [2024-12-06 18:30:30.094997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.621 [2024-12-06 18:30:30.095028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:19.621 [2024-12-06 18:30:30.095046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.364 ms 00:32:19.621 [2024-12-06 18:30:30.095057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.621 [2024-12-06 18:30:30.095091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.621 [2024-12-06 18:30:30.095102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:19.621 [2024-12-06 18:30:30.095115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:19.621 [2024-12-06 18:30:30.095125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:19.621 [2024-12-06 18:30:30.095163] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:32:19.621 [2024-12-06 18:30:30.095315] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:19.621 [2024-12-06 18:30:30.095336] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:19.621 [2024-12-06 18:30:30.095350] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:19.621 [2024-12-06 18:30:30.095366] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:19.621 [2024-12-06 18:30:30.095378] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:19.621 [2024-12-06 18:30:30.095392] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:19.621 [2024-12-06 18:30:30.095403] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:19.622 [2024-12-06 18:30:30.095420] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:19.622 [2024-12-06 18:30:30.095430] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:19.622 [2024-12-06 18:30:30.095443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.622 [2024-12-06 18:30:30.095453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:19.622 [2024-12-06 18:30:30.095466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.282 ms 00:32:19.622 [2024-12-06 18:30:30.095476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.622 [2024-12-06 18:30:30.095556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.622 [2024-12-06 18:30:30.095577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:19.622 [2024-12-06 18:30:30.095590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:32:19.622 [2024-12-06 18:30:30.095600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.622 [2024-12-06 18:30:30.095698] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:19.622 [2024-12-06 18:30:30.095711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:19.622 [2024-12-06 18:30:30.095725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:19.622 [2024-12-06 18:30:30.095735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.622 [2024-12-06 18:30:30.095748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:19.622 [2024-12-06 18:30:30.095758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:19.622 [2024-12-06 18:30:30.095770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:19.622 [2024-12-06 18:30:30.095779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:19.622 [2024-12-06 18:30:30.095791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:19.622 [2024-12-06 18:30:30.095800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.622 [2024-12-06 18:30:30.095814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:19.622 [2024-12-06 18:30:30.095824] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:32:19.622 [2024-12-06 18:30:30.095835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.622 [2024-12-06 18:30:30.095847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:19.622 [2024-12-06 18:30:30.095859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:19.622 [2024-12-06 18:30:30.095868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.622 [2024-12-06 18:30:30.095883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:19.622 [2024-12-06 18:30:30.095893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:19.622 [2024-12-06 18:30:30.095904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.622 [2024-12-06 18:30:30.095914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:19.622 [2024-12-06 18:30:30.095925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:19.622 [2024-12-06 18:30:30.095935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:19.622 [2024-12-06 18:30:30.095946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:19.622 [2024-12-06 18:30:30.095956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:19.622 [2024-12-06 18:30:30.095968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:19.622 [2024-12-06 18:30:30.095977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:19.622 [2024-12-06 18:30:30.095989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:19.622 [2024-12-06 18:30:30.095997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:19.622 [2024-12-06 18:30:30.096009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:19.622 [2024-12-06 18:30:30.096018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:19.622 [2024-12-06 18:30:30.096030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:19.622 [2024-12-06 18:30:30.096039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:19.622 [2024-12-06 18:30:30.096053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:19.622 [2024-12-06 18:30:30.096062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.622 [2024-12-06 18:30:30.096073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:19.622 [2024-12-06 18:30:30.096083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:19.622 [2024-12-06 18:30:30.096095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.622 [2024-12-06 18:30:30.096105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:19.622 [2024-12-06 18:30:30.096116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:19.622 [2024-12-06 18:30:30.096126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.622 [2024-12-06 18:30:30.096137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:19.622 [2024-12-06 18:30:30.096146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:19.622 [2024-12-06 18:30:30.096157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.622 [2024-12-06 18:30:30.096166] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:32:19.622 [2024-12-06 18:30:30.096179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:19.622 [2024-12-06 18:30:30.096189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:19.622 [2024-12-06 18:30:30.096202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.622 [2024-12-06 18:30:30.096212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:19.622 [2024-12-06 18:30:30.096227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:19.622 [2024-12-06 18:30:30.096236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:19.622 [2024-12-06 18:30:30.096248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:19.622 [2024-12-06 18:30:30.096257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:19.622 [2024-12-06 18:30:30.096280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:19.622 [2024-12-06 18:30:30.096291] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:19.622 [2024-12-06 18:30:30.096309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:19.622 [2024-12-06 18:30:30.096321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:19.622 [2024-12-06 18:30:30.096334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:19.622 [2024-12-06 18:30:30.096344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:19.622 [2024-12-06 18:30:30.096357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:19.622 [2024-12-06 18:30:30.096367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:19.622 [2024-12-06 18:30:30.096380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:19.622 [2024-12-06 18:30:30.096390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:19.622 [2024-12-06 18:30:30.096404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:19.622 [2024-12-06 18:30:30.096415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:19.622 [2024-12-06 18:30:30.096431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:19.622 [2024-12-06 18:30:30.096441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:19.622 [2024-12-06 18:30:30.096454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:19.622 [2024-12-06 18:30:30.096464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:19.622 [2024-12-06 18:30:30.096477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:19.622 [2024-12-06 18:30:30.096487] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:19.622 [2024-12-06 18:30:30.096501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:19.622 [2024-12-06 18:30:30.096512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:19.622 [2024-12-06 18:30:30.096525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:19.622 [2024-12-06 18:30:30.096535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:19.622 [2024-12-06 18:30:30.096548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:19.622 [2024-12-06 18:30:30.096559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.622 [2024-12-06 18:30:30.096572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:19.622 [2024-12-06 18:30:30.096583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.922 ms 00:32:19.622 [2024-12-06 18:30:30.096596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.622 [2024-12-06 18:30:30.096637] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
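Note: the capacities in the layout dump above follow directly from the bdev geometry queried earlier in this run. A minimal sanity-check sketch in bash (all constants are copied from the log; the check itself is not part of the test):

  # base bdev: 5242880 blocks * 4096 B/block = 20480 MiB, matching "Base device capacity"
  echo $(( 5242880 * 4096 / 1048576 ))         # -> 20480
  # L2P table: 3774873 entries * 4 B/entry, per "L2P entries" / "L2P address size"
  echo "scale=2; 3774873 * 4 / 1048576" | bc   # -> 14.39 (MiB), consistent with the
                                               #    14.50 MiB "Region l2p" once the layout
                                               #    rounds the region up
  # NV cache: 5120 MiB over the 5 chunks scrubbed below -> roughly 1 GiB per chunk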
00:32:19.622 [2024-12-06 18:30:30.096655] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:26.190 [2024-12-06 18:30:35.830150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:35.830453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:26.190 [2024-12-06 18:30:35.830568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5742.821 ms 00:32:26.190 [2024-12-06 18:30:35.830612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:35.868925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:35.869189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:26.190 [2024-12-06 18:30:35.869316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.999 ms 00:32:26.190 [2024-12-06 18:30:35.869361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:35.869486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:35.869586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:26.190 [2024-12-06 18:30:35.869625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:32:26.190 [2024-12-06 18:30:35.869665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:35.915546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:35.915801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:26.190 [2024-12-06 18:30:35.915931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.836 ms 00:32:26.190 [2024-12-06 18:30:35.915975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:35.916037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:35.916078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:26.190 [2024-12-06 18:30:35.916109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:26.190 [2024-12-06 18:30:35.916141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:35.916665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:35.916792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:26.190 [2024-12-06 18:30:35.916885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.421 ms 00:32:26.190 [2024-12-06 18:30:35.916925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:35.916990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:35.917025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:26.190 [2024-12-06 18:30:35.917149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:32:26.190 [2024-12-06 18:30:35.917249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:35.936783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:35.936951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:26.190 [2024-12-06 18:30:35.937080] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.504 ms 00:32:26.190 [2024-12-06 18:30:35.937121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:35.963982] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:26.190 [2024-12-06 18:30:35.965242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:35.965284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:26.190 [2024-12-06 18:30:35.965306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.959 ms 00:32:26.190 [2024-12-06 18:30:35.965320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:36.008590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:36.008768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:32:26.190 [2024-12-06 18:30:36.008796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.293 ms 00:32:26.190 [2024-12-06 18:30:36.008808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:36.008945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:36.008963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:26.190 [2024-12-06 18:30:36.008980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:32:26.190 [2024-12-06 18:30:36.008990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:36.046296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:36.046347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:32:26.190 [2024-12-06 18:30:36.046366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.300 ms 00:32:26.190 [2024-12-06 18:30:36.046377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:36.083113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:36.083154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:32:26.190 [2024-12-06 18:30:36.083172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.734 ms 00:32:26.190 [2024-12-06 18:30:36.083182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:36.083930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:36.083959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:26.190 [2024-12-06 18:30:36.083974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.707 ms 00:32:26.190 [2024-12-06 18:30:36.083987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:36.209816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:36.210050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:32:26.190 [2024-12-06 18:30:36.210085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 125.970 ms 00:32:26.190 [2024-12-06 18:30:36.210097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:36.249068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:32:26.190 [2024-12-06 18:30:36.249248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:32:26.190 [2024-12-06 18:30:36.249292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.913 ms 00:32:26.190 [2024-12-06 18:30:36.249303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:36.287385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:36.287537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:32:26.190 [2024-12-06 18:30:36.287565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.093 ms 00:32:26.190 [2024-12-06 18:30:36.287576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:36.324867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:36.324915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:26.190 [2024-12-06 18:30:36.324934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.305 ms 00:32:26.190 [2024-12-06 18:30:36.324945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:36.324998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:36.325010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:26.190 [2024-12-06 18:30:36.325028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:26.190 [2024-12-06 18:30:36.325038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:36.325144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:26.190 [2024-12-06 18:30:36.325160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:26.190 [2024-12-06 18:30:36.325174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:32:26.190 [2024-12-06 18:30:36.325184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:26.190 [2024-12-06 18:30:36.326208] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 6257.258 ms, result 0 00:32:26.190 { 00:32:26.190 "name": "ftl", 00:32:26.190 "uuid": "aaebc61a-a923-4f65-98d7-8a7f2859da96" 00:32:26.190 } 00:32:26.190 18:30:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:32:26.190 [2024-12-06 18:30:36.549089] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:26.190 18:30:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:32:26.449 18:30:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:32:26.450 [2024-12-06 18:30:36.972775] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:26.450 18:30:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:32:26.709 [2024-12-06 18:30:37.250097] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:26.709 18:30:37 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:27.276 Fill FTL, iteration 1 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83355 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83355 /var/tmp/spdk.tgt.sock 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83355 ']' 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:32:27.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:27.276 18:30:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:27.276 [2024-12-06 18:30:37.712403] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
00:32:27.276 [2024-12-06 18:30:37.712762] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83355 ] 00:32:27.535 [2024-12-06 18:30:37.893473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.535 [2024-12-06 18:30:38.009989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.474 18:30:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:28.474 18:30:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:28.474 18:30:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:32:28.733 ftln1 00:32:28.733 18:30:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:32:28.733 18:30:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:32:28.992 18:30:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:32:28.992 18:30:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83355 00:32:28.992 18:30:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83355 ']' 00:32:28.992 18:30:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83355 00:32:28.992 18:30:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:28.992 18:30:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:28.992 18:30:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83355 00:32:28.992 killing process with pid 83355 00:32:28.992 18:30:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:28.992 18:30:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:28.992 18:30:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83355' 00:32:28.992 18:30:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83355 00:32:28.992 18:30:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83355 00:32:31.527 18:30:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:32:31.527 18:30:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:31.527 [2024-12-06 18:30:41.797832] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
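Note: condensed, the tcp_dd helper exercised above does three things. This sketch is assembled only from commands visible in this log; rpc.py and spdk_dd abbreviate the full /home/vagrant/spdk_repo/spdk/... paths, and ini.json stands for the logged test/ftl/config/ini.json:

  # 1) a short-lived spdk_tgt acts as the NVMe/TCP initiator and exposes the
  #    exported ftl subsystem as local bdev "ftln1"
  rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl \
      -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  # 2) that target's bdev subsystem config is captured as JSON
  { echo '{"subsystems": ['
    rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
    echo ']}'
  } > ini.json
  # 3) the initiator process is killed (pid 83355 above) and spdk_dd replays the
  #    captured config to perform the actual I/O
  spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0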
00:32:31.527 [2024-12-06 18:30:41.797955] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83408 ] 00:32:31.527 [2024-12-06 18:30:41.979528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.528 [2024-12-06 18:30:42.088829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.452  [2024-12-06T18:30:44.597Z] Copying: 250/1024 [MB] (250 MBps) [2024-12-06T18:30:45.972Z] Copying: 498/1024 [MB] (248 MBps) [2024-12-06T18:30:46.906Z] Copying: 745/1024 [MB] (247 MBps) [2024-12-06T18:30:46.906Z] Copying: 993/1024 [MB] (248 MBps) [2024-12-06T18:30:47.835Z] Copying: 1024/1024 [MB] (average 248 MBps) 00:32:37.259 00:32:37.517 Calculate MD5 checksum, iteration 1 00:32:37.517 18:30:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:32:37.517 18:30:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:32:37.517 18:30:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:37.517 18:30:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:37.517 18:30:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:37.517 18:30:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:37.517 18:30:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:37.517 18:30:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:37.517 [2024-12-06 18:30:47.939919] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
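Note: quick arithmetic on the fill that just completed: 1024 MiB at the logged average of 248 MBps is about 4.1 s of transfer time at qd=2 over the loopback NVMe/TCP path:

  echo "scale=1; 1024/248" | bc   # -> 4.1 (seconds, from the logged average)

The MD5 readback launched above moves the same 1 GiB in the opposite direction (ftln1 -> file), so the digest is computed over exactly the data FTL returns.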
00:32:37.517 [2024-12-06 18:30:47.940523] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83472 ] 00:32:37.774 [2024-12-06 18:30:48.122249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.774 [2024-12-06 18:30:48.232298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.154  [2024-12-06T18:30:50.309Z] Copying: 690/1024 [MB] (690 MBps) [2024-12-06T18:30:51.247Z] Copying: 1024/1024 [MB] (average 672 MBps) 00:32:40.671 00:32:40.671 18:30:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:32:40.671 18:30:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:42.599 18:30:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:42.599 18:30:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=1644d2601499acc57cc829717d8638b6 00:32:42.599 Fill FTL, iteration 2 00:32:42.599 18:30:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:42.599 18:30:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:42.599 18:30:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:32:42.599 18:30:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:42.599 18:30:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:42.599 18:30:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:42.599 18:30:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:42.599 18:30:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:42.599 18:30:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:42.599 [2024-12-06 18:30:52.927076] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
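Note: with iteration 2 starting above, the driver's pattern is visible in full. A reconstruction of the per-iteration loop from the logged variables (a sketch only: the real upgrade_shutdown.sh bookkeeping may advance seek/skip differently, but the offsets come out the same):

  file=/home/vagrant/spdk_repo/spdk/test/ftl/file    # path as logged
  for (( i = 0; i < 2; i++ )); do                    # iterations=2 per the setup above
      # write one 1 GiB stripe of random data at offset i GiB...
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$(( i * 1024 ))
      # ...then read the same stripe back and record its digest
      tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$(( i * 1024 ))
      sums[i]=$( md5sum "$file" | cut -f1 -d' ' )
  done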
00:32:42.599 [2024-12-06 18:30:52.927443] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83528 ] 00:32:42.599 [2024-12-06 18:30:53.109721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.859 [2024-12-06 18:30:53.224255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.236  [2024-12-06T18:30:55.746Z] Copying: 240/1024 [MB] (240 MBps) [2024-12-06T18:30:56.681Z] Copying: 475/1024 [MB] (235 MBps) [2024-12-06T18:30:58.076Z] Copying: 716/1024 [MB] (241 MBps) [2024-12-06T18:30:58.076Z] Copying: 958/1024 [MB] (242 MBps) [2024-12-06T18:30:59.453Z] Copying: 1024/1024 [MB] (average 239 MBps) 00:32:48.877 00:32:48.877 Calculate MD5 checksum, iteration 2 00:32:48.877 18:30:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:32:48.877 18:30:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:32:48.877 18:30:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:48.877 18:30:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:48.877 18:30:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:48.877 18:30:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:48.877 18:30:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:48.877 18:30:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:48.877 [2024-12-06 18:30:59.212964] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
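Note: sums[0] above is 1644d2601499acc57cc829717d8638b6 and sums[1] is computed just below; these per-stripe digests are what make the shutdown/upgrade sequence later in this run checkable. A sketch of the kind of verification they enable (assumed shape only; the actual comparison step is not shown in this excerpt):

  # after restart, re-read stripe i and require the digest recorded before shutdown
  readback=$( md5sum "$file" | cut -f1 -d' ' )
  [[ "$readback" == "${sums[i]}" ]] || { echo "stripe $i: MD5 mismatch"; exit 1; }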
00:32:48.877 [2024-12-06 18:30:59.213519] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83592 ] 00:32:48.877 [2024-12-06 18:30:59.394447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.137 [2024-12-06 18:30:59.511390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.043  [2024-12-06T18:31:01.878Z] Copying: 672/1024 [MB] (672 MBps) [2024-12-06T18:31:03.256Z] Copying: 1024/1024 [MB] (average 667 MBps) 00:32:52.680 00:32:52.680 18:31:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:32:52.680 18:31:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:54.590 18:31:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:54.590 18:31:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=f001df28c6d6d2eaf228d1b870c19689 00:32:54.590 18:31:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:54.590 18:31:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:54.590 18:31:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:54.590 [2024-12-06 18:31:04.938772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:54.590 [2024-12-06 18:31:04.938825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:54.590 [2024-12-06 18:31:04.938843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:54.591 [2024-12-06 18:31:04.938854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:54.591 [2024-12-06 18:31:04.938883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:54.591 [2024-12-06 18:31:04.938900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:54.591 [2024-12-06 18:31:04.938910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:54.591 [2024-12-06 18:31:04.938920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:54.591 [2024-12-06 18:31:04.938941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:54.591 [2024-12-06 18:31:04.938952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:54.591 [2024-12-06 18:31:04.938963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:54.591 [2024-12-06 18:31:04.938973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:54.591 [2024-12-06 18:31:04.939038] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.254 ms, result 0 00:32:54.591 true 00:32:54.591 18:31:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:54.591 { 00:32:54.591 "name": "ftl", 00:32:54.591 "properties": [ 00:32:54.591 { 00:32:54.591 "name": "superblock_version", 00:32:54.591 "value": 5, 00:32:54.591 "read-only": true 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "name": "base_device", 00:32:54.591 "bands": [ 00:32:54.591 { 00:32:54.591 "id": 0, 00:32:54.591 "state": "FREE", 00:32:54.591 "validity": 0.0 
00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 1, 00:32:54.591 "state": "FREE", 00:32:54.591 "validity": 0.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 2, 00:32:54.591 "state": "FREE", 00:32:54.591 "validity": 0.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 3, 00:32:54.591 "state": "FREE", 00:32:54.591 "validity": 0.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 4, 00:32:54.591 "state": "FREE", 00:32:54.591 "validity": 0.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 5, 00:32:54.591 "state": "FREE", 00:32:54.591 "validity": 0.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 6, 00:32:54.591 "state": "FREE", 00:32:54.591 "validity": 0.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 7, 00:32:54.591 "state": "FREE", 00:32:54.591 "validity": 0.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 8, 00:32:54.591 "state": "FREE", 00:32:54.591 "validity": 0.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 9, 00:32:54.591 "state": "FREE", 00:32:54.591 "validity": 0.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 10, 00:32:54.591 "state": "FREE", 00:32:54.591 "validity": 0.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 11, 00:32:54.591 "state": "FREE", 00:32:54.591 "validity": 0.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 12, 00:32:54.591 "state": "FREE", 00:32:54.591 "validity": 0.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 13, 00:32:54.591 "state": "FREE", 00:32:54.591 "validity": 0.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 14, 00:32:54.591 "state": "FREE", 00:32:54.591 "validity": 0.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 15, 00:32:54.591 "state": "FREE", 00:32:54.591 "validity": 0.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 16, 00:32:54.591 "state": "FREE", 00:32:54.591 "validity": 0.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 17, 00:32:54.591 "state": "FREE", 00:32:54.591 "validity": 0.0 00:32:54.591 } 00:32:54.591 ], 00:32:54.591 "read-only": true 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "name": "cache_device", 00:32:54.591 "type": "bdev", 00:32:54.591 "chunks": [ 00:32:54.591 { 00:32:54.591 "id": 0, 00:32:54.591 "state": "INACTIVE", 00:32:54.591 "utilization": 0.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 1, 00:32:54.591 "state": "CLOSED", 00:32:54.591 "utilization": 1.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 2, 00:32:54.591 "state": "CLOSED", 00:32:54.591 "utilization": 1.0 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 3, 00:32:54.591 "state": "OPEN", 00:32:54.591 "utilization": 0.001953125 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "id": 4, 00:32:54.591 "state": "OPEN", 00:32:54.591 "utilization": 0.0 00:32:54.591 } 00:32:54.591 ], 00:32:54.591 "read-only": true 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "name": "verbose_mode", 00:32:54.591 "value": true, 00:32:54.591 "unit": "", 00:32:54.591 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:54.591 }, 00:32:54.591 { 00:32:54.591 "name": "prep_upgrade_on_shutdown", 00:32:54.591 "value": false, 00:32:54.591 "unit": "", 00:32:54.591 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:54.591 } 00:32:54.591 ] 00:32:54.591 } 00:32:54.591 18:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:32:54.849 [2024-12-06 18:31:05.314587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
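Note: the chunk table in the JSON above squares with the 2 GiB written so far: chunks 1 and 2 are CLOSED at utilization 1.0 (consistent with the two 1 GiB fills), chunk 3 is OPEN at 0.001953125, and the jq filter applied below counts exactly these three as non-empty (used=3). The OPEN sliver works out to 2 MiB, plausibly write-buffer spill-over or cache metadata rather than user data (an inference, not stated in the log):

  echo "0.001953125 * 1024" | bc   # -> 2.000000000, i.e. 2 MiB of a ~1 GiB chunk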
00:32:54.849 [2024-12-06 18:31:05.314640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:54.849 [2024-12-06 18:31:05.314657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:54.849 [2024-12-06 18:31:05.314668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:54.849 [2024-12-06 18:31:05.314693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:54.849 [2024-12-06 18:31:05.314705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:54.849 [2024-12-06 18:31:05.314715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:54.849 [2024-12-06 18:31:05.314725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:54.849 [2024-12-06 18:31:05.314745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:54.849 [2024-12-06 18:31:05.314755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:54.849 [2024-12-06 18:31:05.314766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:54.849 [2024-12-06 18:31:05.314775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:54.849 [2024-12-06 18:31:05.314834] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.239 ms, result 0 00:32:54.849 true 00:32:54.849 18:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:32:54.849 18:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:54.849 18:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:55.108 18:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:32:55.108 18:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:32:55.108 18:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:55.366 [2024-12-06 18:31:05.761473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:55.366 [2024-12-06 18:31:05.761705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:55.366 [2024-12-06 18:31:05.761731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:55.367 [2024-12-06 18:31:05.761749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:55.367 [2024-12-06 18:31:05.761794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:55.367 [2024-12-06 18:31:05.761806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:55.367 [2024-12-06 18:31:05.761817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:55.367 [2024-12-06 18:31:05.761827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:55.367 [2024-12-06 18:31:05.761847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:55.367 [2024-12-06 18:31:05.761858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:55.367 [2024-12-06 18:31:05.761867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:55.367 [2024-12-06 18:31:05.761877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:55.367 [2024-12-06 18:31:05.761943] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.454 ms, result 0 00:32:55.367 true 00:32:55.367 18:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:55.625 { 00:32:55.625 "name": "ftl", 00:32:55.625 "properties": [ 00:32:55.625 { 00:32:55.625 "name": "superblock_version", 00:32:55.625 "value": 5, 00:32:55.625 "read-only": true 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "name": "base_device", 00:32:55.625 "bands": [ 00:32:55.625 { 00:32:55.625 "id": 0, 00:32:55.625 "state": "FREE", 00:32:55.625 "validity": 0.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 1, 00:32:55.625 "state": "FREE", 00:32:55.625 "validity": 0.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 2, 00:32:55.625 "state": "FREE", 00:32:55.625 "validity": 0.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 3, 00:32:55.625 "state": "FREE", 00:32:55.625 "validity": 0.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 4, 00:32:55.625 "state": "FREE", 00:32:55.625 "validity": 0.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 5, 00:32:55.625 "state": "FREE", 00:32:55.625 "validity": 0.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 6, 00:32:55.625 "state": "FREE", 00:32:55.625 "validity": 0.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 7, 00:32:55.625 "state": "FREE", 00:32:55.625 "validity": 0.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 8, 00:32:55.625 "state": "FREE", 00:32:55.625 "validity": 0.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 9, 00:32:55.625 "state": "FREE", 00:32:55.625 "validity": 0.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 10, 00:32:55.625 "state": "FREE", 00:32:55.625 "validity": 0.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 11, 00:32:55.625 "state": "FREE", 00:32:55.625 "validity": 0.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 12, 00:32:55.625 "state": "FREE", 00:32:55.625 "validity": 0.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 13, 00:32:55.625 "state": "FREE", 00:32:55.625 "validity": 0.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 14, 00:32:55.625 "state": "FREE", 00:32:55.625 "validity": 0.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 15, 00:32:55.625 "state": "FREE", 00:32:55.625 "validity": 0.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 16, 00:32:55.625 "state": "FREE", 00:32:55.625 "validity": 0.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 17, 00:32:55.625 "state": "FREE", 00:32:55.625 "validity": 0.0 00:32:55.625 } 00:32:55.625 ], 00:32:55.625 "read-only": true 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "name": "cache_device", 00:32:55.625 "type": "bdev", 00:32:55.625 "chunks": [ 00:32:55.625 { 00:32:55.625 "id": 0, 00:32:55.625 "state": "INACTIVE", 00:32:55.625 "utilization": 0.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 1, 00:32:55.625 "state": "CLOSED", 00:32:55.625 "utilization": 1.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 2, 00:32:55.625 "state": "CLOSED", 00:32:55.625 "utilization": 1.0 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 3, 00:32:55.625 "state": "OPEN", 00:32:55.625 "utilization": 0.001953125 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "id": 4, 00:32:55.625 "state": "OPEN", 00:32:55.625 "utilization": 0.0 00:32:55.625 } 00:32:55.625 ], 00:32:55.625 "read-only": true 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "name": "verbose_mode", 
00:32:55.625 "value": true, 00:32:55.625 "unit": "", 00:32:55.625 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:55.625 }, 00:32:55.625 { 00:32:55.625 "name": "prep_upgrade_on_shutdown", 00:32:55.625 "value": true, 00:32:55.625 "unit": "", 00:32:55.625 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:55.625 } 00:32:55.625 ] 00:32:55.625 } 00:32:55.625 18:31:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:32:55.625 18:31:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83204 ]] 00:32:55.625 18:31:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83204 00:32:55.625 18:31:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83204 ']' 00:32:55.625 18:31:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83204 00:32:55.625 18:31:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:55.625 18:31:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:55.625 18:31:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83204 00:32:55.625 killing process with pid 83204 00:32:55.625 18:31:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:55.625 18:31:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:55.626 18:31:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83204' 00:32:55.626 18:31:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83204 00:32:55.626 18:31:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83204 00:32:57.003 [2024-12-06 18:31:07.137614] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:57.004 [2024-12-06 18:31:07.157749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.004 [2024-12-06 18:31:07.157794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:57.004 [2024-12-06 18:31:07.157809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:57.004 [2024-12-06 18:31:07.157820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.004 [2024-12-06 18:31:07.157843] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:57.004 [2024-12-06 18:31:07.162054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.004 [2024-12-06 18:31:07.162084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:57.004 [2024-12-06 18:31:07.162097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.201 ms 00:32:57.004 [2024-12-06 18:31:07.162113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.190 [2024-12-06 18:31:14.408084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.190 [2024-12-06 18:31:14.408358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:05.190 [2024-12-06 18:31:14.408387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7257.706 ms 00:33:05.190 [2024-12-06 18:31:14.408407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.190 [2024-12-06 18:31:14.409395] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:33:05.190 [2024-12-06 18:31:14.409420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:05.190 [2024-12-06 18:31:14.409433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.964 ms 00:33:05.190 [2024-12-06 18:31:14.409443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.190 [2024-12-06 18:31:14.410379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.190 [2024-12-06 18:31:14.410409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:05.190 [2024-12-06 18:31:14.410422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.908 ms 00:33:05.190 [2024-12-06 18:31:14.410433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.190 [2024-12-06 18:31:14.425908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.190 [2024-12-06 18:31:14.425944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:05.190 [2024-12-06 18:31:14.425958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.456 ms 00:33:05.190 [2024-12-06 18:31:14.425968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.190 [2024-12-06 18:31:14.435194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.190 [2024-12-06 18:31:14.435234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:05.190 [2024-12-06 18:31:14.435248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.204 ms 00:33:05.190 [2024-12-06 18:31:14.435258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.190 [2024-12-06 18:31:14.435371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.190 [2024-12-06 18:31:14.435386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:05.190 [2024-12-06 18:31:14.435403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:33:05.190 [2024-12-06 18:31:14.435413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.190 [2024-12-06 18:31:14.449863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.191 [2024-12-06 18:31:14.450025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:05.191 [2024-12-06 18:31:14.450045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.455 ms 00:33:05.191 [2024-12-06 18:31:14.450056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 [2024-12-06 18:31:14.464512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.191 [2024-12-06 18:31:14.464546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:05.191 [2024-12-06 18:31:14.464558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.442 ms 00:33:05.191 [2024-12-06 18:31:14.464567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 [2024-12-06 18:31:14.479375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.191 [2024-12-06 18:31:14.479517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:05.191 [2024-12-06 18:31:14.479538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.797 ms 00:33:05.191 [2024-12-06 18:31:14.479548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 [2024-12-06 18:31:14.494246] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.191 [2024-12-06 18:31:14.494418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:05.191 [2024-12-06 18:31:14.494439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.622 ms 00:33:05.191 [2024-12-06 18:31:14.494449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 [2024-12-06 18:31:14.494486] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:05.191 [2024-12-06 18:31:14.494513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:05.191 [2024-12-06 18:31:14.494526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:05.191 [2024-12-06 18:31:14.494537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:05.191 [2024-12-06 18:31:14.494548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:05.191 [2024-12-06 18:31:14.494559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:05.191 [2024-12-06 18:31:14.494569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:05.191 [2024-12-06 18:31:14.494580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:05.191 [2024-12-06 18:31:14.494590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:05.191 [2024-12-06 18:31:14.494601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:05.191 [2024-12-06 18:31:14.494611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:05.191 [2024-12-06 18:31:14.494622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:05.191 [2024-12-06 18:31:14.494632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:05.191 [2024-12-06 18:31:14.494642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:05.191 [2024-12-06 18:31:14.494653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:05.191 [2024-12-06 18:31:14.494663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:05.191 [2024-12-06 18:31:14.494673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:05.191 [2024-12-06 18:31:14.494683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:05.191 [2024-12-06 18:31:14.494693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:05.191 [2024-12-06 18:31:14.494706] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:05.191 [2024-12-06 18:31:14.494715] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: aaebc61a-a923-4f65-98d7-8a7f2859da96 00:33:05.191 [2024-12-06 18:31:14.494726] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:05.191 [2024-12-06 18:31:14.494735] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:33:05.191 [2024-12-06 18:31:14.494745] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:33:05.191 [2024-12-06 18:31:14.494755] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:33:05.191 [2024-12-06 18:31:14.494765] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:05.191 [2024-12-06 18:31:14.494780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:05.191 [2024-12-06 18:31:14.494790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:05.191 [2024-12-06 18:31:14.494800] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:05.191 [2024-12-06 18:31:14.494809] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:05.191 [2024-12-06 18:31:14.494820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.191 [2024-12-06 18:31:14.494834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:05.191 [2024-12-06 18:31:14.494844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.335 ms 00:33:05.191 [2024-12-06 18:31:14.494854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 [2024-12-06 18:31:14.514929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.191 [2024-12-06 18:31:14.515069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:05.191 [2024-12-06 18:31:14.515189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.076 ms 00:33:05.191 [2024-12-06 18:31:14.515233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 [2024-12-06 18:31:14.515831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.191 [2024-12-06 18:31:14.515931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:05.191 [2024-12-06 18:31:14.516000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.541 ms 00:33:05.191 [2024-12-06 18:31:14.516035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 [2024-12-06 18:31:14.581676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:05.191 [2024-12-06 18:31:14.581815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:05.191 [2024-12-06 18:31:14.581903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:05.191 [2024-12-06 18:31:14.581939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 [2024-12-06 18:31:14.581991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:05.191 [2024-12-06 18:31:14.582023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:05.191 [2024-12-06 18:31:14.582053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:05.191 [2024-12-06 18:31:14.582082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 [2024-12-06 18:31:14.582197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:05.191 [2024-12-06 18:31:14.582459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:05.191 [2024-12-06 18:31:14.582500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:05.191 [2024-12-06 18:31:14.582536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 [2024-12-06 18:31:14.582581] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:05.191 [2024-12-06 18:31:14.582613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:05.191 [2024-12-06 18:31:14.582643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:05.191 [2024-12-06 18:31:14.582672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 [2024-12-06 18:31:14.705604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:05.191 [2024-12-06 18:31:14.705846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:05.191 [2024-12-06 18:31:14.705983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:05.191 [2024-12-06 18:31:14.706028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 [2024-12-06 18:31:14.806321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:05.191 [2024-12-06 18:31:14.806509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:05.191 [2024-12-06 18:31:14.806591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:05.191 [2024-12-06 18:31:14.806627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 [2024-12-06 18:31:14.806751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:05.191 [2024-12-06 18:31:14.806787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:05.191 [2024-12-06 18:31:14.806818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:05.191 [2024-12-06 18:31:14.806848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 [2024-12-06 18:31:14.806935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:05.191 [2024-12-06 18:31:14.807079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:05.191 [2024-12-06 18:31:14.807172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:05.191 [2024-12-06 18:31:14.807201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 [2024-12-06 18:31:14.807345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:05.191 [2024-12-06 18:31:14.807388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:05.191 [2024-12-06 18:31:14.807420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:05.191 [2024-12-06 18:31:14.807525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 [2024-12-06 18:31:14.807639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:05.191 [2024-12-06 18:31:14.807679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:05.191 [2024-12-06 18:31:14.807709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:05.191 [2024-12-06 18:31:14.807739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 [2024-12-06 18:31:14.807797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:05.191 [2024-12-06 18:31:14.807829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:05.191 [2024-12-06 18:31:14.808038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:05.191 [2024-12-06 18:31:14.808076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 
[2024-12-06 18:31:14.808158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:05.191 [2024-12-06 18:31:14.808195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:05.191 [2024-12-06 18:31:14.808300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:05.191 [2024-12-06 18:31:14.808336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.191 [2024-12-06 18:31:14.808491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7663.138 ms, result 0 00:33:08.475 18:31:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:08.475 18:31:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:33:08.475 18:31:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:08.475 18:31:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:08.475 18:31:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:08.475 18:31:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:08.475 18:31:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83786 00:33:08.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:08.475 18:31:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:08.475 18:31:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83786 00:33:08.475 18:31:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83786 ']' 00:33:08.475 18:31:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:08.475 18:31:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:08.475 18:31:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:08.475 18:31:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:08.475 18:31:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:08.475 [2024-12-06 18:31:18.505429] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
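For reference, the WAF figure in the ftl_dev_dump_stats output above is simply the ratio of the two counters logged beside it: total writes over user writes. A one-line check, using the numbers verbatim from the dump:

    # WAF = total writes / user writes, per the stats dump above
    awk 'BEGIN { printf "WAF = %.4f\n", 786752 / 524288 }'
    # prints: WAF = 1.5006

The surplus over user writes reflects FTL-internal writes, such as the metadata persists (L2P, NV cache, valid map, P2L, band info, trim, superblock) traced during the clean shutdown above.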
00:33:08.475 [2024-12-06 18:31:18.506130] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83786 ] 00:33:08.475 [2024-12-06 18:31:18.687098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.475 [2024-12-06 18:31:18.800215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.410 [2024-12-06 18:31:19.793806] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:09.410 [2024-12-06 18:31:19.794032] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:09.410 [2024-12-06 18:31:19.940927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.410 [2024-12-06 18:31:19.940984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:09.410 [2024-12-06 18:31:19.941001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:09.410 [2024-12-06 18:31:19.941012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.410 [2024-12-06 18:31:19.941069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.410 [2024-12-06 18:31:19.941082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:09.410 [2024-12-06 18:31:19.941093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:33:09.410 [2024-12-06 18:31:19.941103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.410 [2024-12-06 18:31:19.941132] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:09.410 [2024-12-06 18:31:19.942139] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:09.410 [2024-12-06 18:31:19.942163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.410 [2024-12-06 18:31:19.942174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:09.410 [2024-12-06 18:31:19.942185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.043 ms 00:33:09.410 [2024-12-06 18:31:19.942195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.410 [2024-12-06 18:31:19.943712] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:09.410 [2024-12-06 18:31:19.964239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.410 [2024-12-06 18:31:19.964290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:09.410 [2024-12-06 18:31:19.964313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.560 ms 00:33:09.410 [2024-12-06 18:31:19.964324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.410 [2024-12-06 18:31:19.964391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.410 [2024-12-06 18:31:19.964403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:09.410 [2024-12-06 18:31:19.964433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:33:09.410 [2024-12-06 18:31:19.964443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.410 [2024-12-06 18:31:19.971386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.410 [2024-12-06 
18:31:19.971558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:09.410 [2024-12-06 18:31:19.971579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.871 ms 00:33:09.410 [2024-12-06 18:31:19.971590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.410 [2024-12-06 18:31:19.971665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.410 [2024-12-06 18:31:19.971679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:09.410 [2024-12-06 18:31:19.971690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:33:09.410 [2024-12-06 18:31:19.971700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.410 [2024-12-06 18:31:19.971749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.410 [2024-12-06 18:31:19.971765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:09.410 [2024-12-06 18:31:19.971776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:09.410 [2024-12-06 18:31:19.971786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.410 [2024-12-06 18:31:19.971813] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:09.410 [2024-12-06 18:31:19.976461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.410 [2024-12-06 18:31:19.976494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:09.410 [2024-12-06 18:31:19.976506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.662 ms 00:33:09.410 [2024-12-06 18:31:19.976520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.410 [2024-12-06 18:31:19.976549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.410 [2024-12-06 18:31:19.976561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:09.410 [2024-12-06 18:31:19.976572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:09.410 [2024-12-06 18:31:19.976582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.410 [2024-12-06 18:31:19.976640] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:09.410 [2024-12-06 18:31:19.976668] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:09.410 [2024-12-06 18:31:19.976702] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:09.410 [2024-12-06 18:31:19.976719] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:09.410 [2024-12-06 18:31:19.976807] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:09.411 [2024-12-06 18:31:19.976821] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:09.411 [2024-12-06 18:31:19.976834] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:09.411 [2024-12-06 18:31:19.976847] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:09.411 [2024-12-06 18:31:19.976859] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:33:09.411 [2024-12-06 18:31:19.976873] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:09.411 [2024-12-06 18:31:19.976884] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:09.411 [2024-12-06 18:31:19.976894] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:09.411 [2024-12-06 18:31:19.976904] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:09.411 [2024-12-06 18:31:19.976914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.411 [2024-12-06 18:31:19.976925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:09.411 [2024-12-06 18:31:19.976935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.278 ms 00:33:09.411 [2024-12-06 18:31:19.976945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.411 [2024-12-06 18:31:19.977019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.411 [2024-12-06 18:31:19.977031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:09.411 [2024-12-06 18:31:19.977044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:33:09.411 [2024-12-06 18:31:19.977054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.411 [2024-12-06 18:31:19.977146] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:09.411 [2024-12-06 18:31:19.977159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:09.411 [2024-12-06 18:31:19.977169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:09.411 [2024-12-06 18:31:19.977179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.411 [2024-12-06 18:31:19.977190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:09.411 [2024-12-06 18:31:19.977200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:09.411 [2024-12-06 18:31:19.977209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:09.411 [2024-12-06 18:31:19.977219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:09.411 [2024-12-06 18:31:19.977229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:09.411 [2024-12-06 18:31:19.977239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.411 [2024-12-06 18:31:19.977250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:09.411 [2024-12-06 18:31:19.977259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:09.411 [2024-12-06 18:31:19.977291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.411 [2024-12-06 18:31:19.977301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:09.411 [2024-12-06 18:31:19.977311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:33:09.411 [2024-12-06 18:31:19.977320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.411 [2024-12-06 18:31:19.977329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:09.411 [2024-12-06 18:31:19.977339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:09.411 [2024-12-06 18:31:19.977348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.411 [2024-12-06 18:31:19.977358] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:09.411 [2024-12-06 18:31:19.977367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:09.411 [2024-12-06 18:31:19.977376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:09.411 [2024-12-06 18:31:19.977385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:09.411 [2024-12-06 18:31:19.977406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:09.411 [2024-12-06 18:31:19.977415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:09.411 [2024-12-06 18:31:19.977424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:09.411 [2024-12-06 18:31:19.977433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:09.411 [2024-12-06 18:31:19.977443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:09.411 [2024-12-06 18:31:19.977452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:09.411 [2024-12-06 18:31:19.977462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:09.411 [2024-12-06 18:31:19.977472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:09.411 [2024-12-06 18:31:19.977481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:09.411 [2024-12-06 18:31:19.977490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:09.411 [2024-12-06 18:31:19.977499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.411 [2024-12-06 18:31:19.977509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:09.411 [2024-12-06 18:31:19.977518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:09.411 [2024-12-06 18:31:19.977527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.411 [2024-12-06 18:31:19.977536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:09.411 [2024-12-06 18:31:19.977545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:09.411 [2024-12-06 18:31:19.977554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.411 [2024-12-06 18:31:19.977562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:09.411 [2024-12-06 18:31:19.977572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:09.411 [2024-12-06 18:31:19.977581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.411 [2024-12-06 18:31:19.977591] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:09.411 [2024-12-06 18:31:19.977601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:09.411 [2024-12-06 18:31:19.977611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:09.411 [2024-12-06 18:31:19.977620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.411 [2024-12-06 18:31:19.977634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:09.411 [2024-12-06 18:31:19.977644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:09.411 [2024-12-06 18:31:19.977653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:09.411 [2024-12-06 18:31:19.977662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:09.411 [2024-12-06 18:31:19.977671] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:09.411 [2024-12-06 18:31:19.977680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:09.411 [2024-12-06 18:31:19.977691] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:09.411 [2024-12-06 18:31:19.977704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:09.411 [2024-12-06 18:31:19.977716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:09.411 [2024-12-06 18:31:19.977726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:09.411 [2024-12-06 18:31:19.977736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:09.411 [2024-12-06 18:31:19.977747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:09.411 [2024-12-06 18:31:19.977757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:09.411 [2024-12-06 18:31:19.977767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:09.411 [2024-12-06 18:31:19.977777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:09.411 [2024-12-06 18:31:19.977786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:09.411 [2024-12-06 18:31:19.977797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:09.411 [2024-12-06 18:31:19.977806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:09.411 [2024-12-06 18:31:19.977817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:09.411 [2024-12-06 18:31:19.977826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:09.411 [2024-12-06 18:31:19.977836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:09.411 [2024-12-06 18:31:19.977846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:09.411 [2024-12-06 18:31:19.977858] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:33:09.411 [2024-12-06 18:31:19.977868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:09.411 [2024-12-06 18:31:19.977879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:09.411 [2024-12-06 18:31:19.977889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:09.411 [2024-12-06 18:31:19.977899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:09.411 [2024-12-06 18:31:19.977910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:09.411 [2024-12-06 18:31:19.977921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.411 [2024-12-06 18:31:19.977931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:09.411 [2024-12-06 18:31:19.977941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.831 ms 00:33:09.411 [2024-12-06 18:31:19.977951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.411 [2024-12-06 18:31:19.977997] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:33:09.411 [2024-12-06 18:31:19.978009] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:33:12.700 [2024-12-06 18:31:23.215802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.700 [2024-12-06 18:31:23.216103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:33:12.700 [2024-12-06 18:31:23.216132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3243.061 ms 00:33:12.700 [2024-12-06 18:31:23.216144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.700 [2024-12-06 18:31:23.257036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.700 [2024-12-06 18:31:23.257095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:12.700 [2024-12-06 18:31:23.257112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.606 ms 00:33:12.700 [2024-12-06 18:31:23.257123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.700 [2024-12-06 18:31:23.257248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.700 [2024-12-06 18:31:23.257284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:12.700 [2024-12-06 18:31:23.257296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:12.700 [2024-12-06 18:31:23.257306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.959 [2024-12-06 18:31:23.299116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.959 [2024-12-06 18:31:23.299169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:12.959 [2024-12-06 18:31:23.299187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.809 ms 00:33:12.959 [2024-12-06 18:31:23.299198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.959 [2024-12-06 18:31:23.299261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.959 [2024-12-06 18:31:23.299288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:12.959 [2024-12-06 18:31:23.299299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:12.959 [2024-12-06 18:31:23.299309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.959 [2024-12-06 18:31:23.299794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.959 [2024-12-06 18:31:23.299809] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:12.959 [2024-12-06 18:31:23.299820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.418 ms 00:33:12.959 [2024-12-06 18:31:23.299830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.959 [2024-12-06 18:31:23.299878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.959 [2024-12-06 18:31:23.299889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:12.959 [2024-12-06 18:31:23.299899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:33:12.959 [2024-12-06 18:31:23.299910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.959 [2024-12-06 18:31:23.318782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.959 [2024-12-06 18:31:23.318826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:12.959 [2024-12-06 18:31:23.318841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.879 ms 00:33:12.959 [2024-12-06 18:31:23.318852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.959 [2024-12-06 18:31:23.349956] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:12.959 [2024-12-06 18:31:23.350008] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:12.959 [2024-12-06 18:31:23.350027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.959 [2024-12-06 18:31:23.350039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:33:12.959 [2024-12-06 18:31:23.350051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.080 ms 00:33:12.959 [2024-12-06 18:31:23.350061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.959 [2024-12-06 18:31:23.370937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.960 [2024-12-06 18:31:23.370988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:33:12.960 [2024-12-06 18:31:23.371004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.852 ms 00:33:12.960 [2024-12-06 18:31:23.371015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.960 [2024-12-06 18:31:23.389869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.960 [2024-12-06 18:31:23.389919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:33:12.960 [2024-12-06 18:31:23.389934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.825 ms 00:33:12.960 [2024-12-06 18:31:23.389944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.960 [2024-12-06 18:31:23.408774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.960 [2024-12-06 18:31:23.408972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:33:12.960 [2024-12-06 18:31:23.408996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.807 ms 00:33:12.960 [2024-12-06 18:31:23.409007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.960 [2024-12-06 18:31:23.409852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.960 [2024-12-06 18:31:23.409878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:12.960 [2024-12-06 
18:31:23.409890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.672 ms 00:33:12.960 [2024-12-06 18:31:23.409901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.960 [2024-12-06 18:31:23.496184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.960 [2024-12-06 18:31:23.496466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:12.960 [2024-12-06 18:31:23.496494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 86.396 ms 00:33:12.960 [2024-12-06 18:31:23.496506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.960 [2024-12-06 18:31:23.509590] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:12.960 [2024-12-06 18:31:23.510695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.960 [2024-12-06 18:31:23.510727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:12.960 [2024-12-06 18:31:23.510743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.139 ms 00:33:12.960 [2024-12-06 18:31:23.510753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.960 [2024-12-06 18:31:23.510981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.960 [2024-12-06 18:31:23.511003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:33:12.960 [2024-12-06 18:31:23.511015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:12.960 [2024-12-06 18:31:23.511026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.960 [2024-12-06 18:31:23.511095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.960 [2024-12-06 18:31:23.511107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:12.960 [2024-12-06 18:31:23.511118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:33:12.960 [2024-12-06 18:31:23.511128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.960 [2024-12-06 18:31:23.511166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.960 [2024-12-06 18:31:23.511177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:12.960 [2024-12-06 18:31:23.511192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:33:12.960 [2024-12-06 18:31:23.511202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.960 [2024-12-06 18:31:23.511237] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:12.960 [2024-12-06 18:31:23.511249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.960 [2024-12-06 18:31:23.511259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:12.960 [2024-12-06 18:31:23.511291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:12.960 [2024-12-06 18:31:23.511301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.219 [2024-12-06 18:31:23.548880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.219 [2024-12-06 18:31:23.548941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:33:13.219 [2024-12-06 18:31:23.548957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.612 ms 00:33:13.219 [2024-12-06 18:31:23.548968] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.219 [2024-12-06 18:31:23.549062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.219 [2024-12-06 18:31:23.549075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:13.219 [2024-12-06 18:31:23.549086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:33:13.219 [2024-12-06 18:31:23.549096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.219 [2024-12-06 18:31:23.550460] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3614.872 ms, result 0 00:33:13.219 [2024-12-06 18:31:23.565281] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.219 [2024-12-06 18:31:23.581255] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:13.220 [2024-12-06 18:31:23.590840] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:13.220 18:31:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:13.220 18:31:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:13.220 18:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:13.220 18:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:13.220 18:31:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:13.479 [2024-12-06 18:31:23.834603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.479 [2024-12-06 18:31:23.834661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:13.479 [2024-12-06 18:31:23.834682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:13.479 [2024-12-06 18:31:23.834693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.479 [2024-12-06 18:31:23.834721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.479 [2024-12-06 18:31:23.834733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:13.479 [2024-12-06 18:31:23.834744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:13.479 [2024-12-06 18:31:23.834754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.479 [2024-12-06 18:31:23.834774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.479 [2024-12-06 18:31:23.834785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:13.479 [2024-12-06 18:31:23.834795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:13.479 [2024-12-06 18:31:23.834806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.479 [2024-12-06 18:31:23.834872] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.264 ms, result 0 00:33:13.479 true 00:33:13.479 18:31:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:13.738 { 00:33:13.738 "name": "ftl", 00:33:13.738 "properties": [ 00:33:13.738 { 00:33:13.738 "name": "superblock_version", 00:33:13.738 "value": 5, 00:33:13.738 "read-only": true 00:33:13.738 }, 
00:33:13.738 { 00:33:13.738 "name": "base_device", 00:33:13.738 "bands": [ 00:33:13.738 { 00:33:13.738 "id": 0, 00:33:13.738 "state": "CLOSED", 00:33:13.738 "validity": 1.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 1, 00:33:13.738 "state": "CLOSED", 00:33:13.738 "validity": 1.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 2, 00:33:13.738 "state": "CLOSED", 00:33:13.738 "validity": 0.007843137254901933 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 3, 00:33:13.738 "state": "FREE", 00:33:13.738 "validity": 0.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 4, 00:33:13.738 "state": "FREE", 00:33:13.738 "validity": 0.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 5, 00:33:13.738 "state": "FREE", 00:33:13.738 "validity": 0.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 6, 00:33:13.738 "state": "FREE", 00:33:13.738 "validity": 0.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 7, 00:33:13.738 "state": "FREE", 00:33:13.738 "validity": 0.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 8, 00:33:13.738 "state": "FREE", 00:33:13.738 "validity": 0.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 9, 00:33:13.738 "state": "FREE", 00:33:13.738 "validity": 0.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 10, 00:33:13.738 "state": "FREE", 00:33:13.738 "validity": 0.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 11, 00:33:13.738 "state": "FREE", 00:33:13.738 "validity": 0.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 12, 00:33:13.738 "state": "FREE", 00:33:13.738 "validity": 0.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 13, 00:33:13.738 "state": "FREE", 00:33:13.738 "validity": 0.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 14, 00:33:13.738 "state": "FREE", 00:33:13.738 "validity": 0.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 15, 00:33:13.738 "state": "FREE", 00:33:13.738 "validity": 0.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 16, 00:33:13.738 "state": "FREE", 00:33:13.738 "validity": 0.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 17, 00:33:13.738 "state": "FREE", 00:33:13.738 "validity": 0.0 00:33:13.738 } 00:33:13.738 ], 00:33:13.738 "read-only": true 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "name": "cache_device", 00:33:13.738 "type": "bdev", 00:33:13.738 "chunks": [ 00:33:13.738 { 00:33:13.738 "id": 0, 00:33:13.738 "state": "INACTIVE", 00:33:13.738 "utilization": 0.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 1, 00:33:13.738 "state": "OPEN", 00:33:13.738 "utilization": 0.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 2, 00:33:13.738 "state": "OPEN", 00:33:13.738 "utilization": 0.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 3, 00:33:13.738 "state": "FREE", 00:33:13.738 "utilization": 0.0 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "id": 4, 00:33:13.738 "state": "FREE", 00:33:13.738 "utilization": 0.0 00:33:13.738 } 00:33:13.738 ], 00:33:13.738 "read-only": true 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "name": "verbose_mode", 00:33:13.738 "value": true, 00:33:13.738 "unit": "", 00:33:13.738 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:13.738 }, 00:33:13.738 { 00:33:13.738 "name": "prep_upgrade_on_shutdown", 00:33:13.738 "value": false, 00:33:13.738 "unit": "", 00:33:13.738 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:13.738 } 00:33:13.738 ] 00:33:13.738 } 00:33:13.738 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:33:13.738 18:31:24 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:13.738 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:13.997 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:33:13.997 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:33:13.997 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:33:13.997 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:13.997 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:33:14.255 Validate MD5 checksum, iteration 1 00:33:14.255 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:33:14.255 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:33:14.255 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:33:14.255 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:14.255 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:14.255 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:14.255 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:14.255 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:14.255 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:14.256 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:14.256 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:14.256 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:14.256 18:31:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:14.256 [2024-12-06 18:31:24.665843] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
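The used=3 earlier in this log and the used=0 just above both come from the jq filter shown in the xtrace: it picks the "cache_device" property out of the bdev_ftl_get_properties JSON and counts the chunks whose utilization is non-zero. That checks out against the two property dumps captured here: before shutdown, chunks 1 and 2 were CLOSED at utilization 1.0 and chunk 3 was OPEN at 0.001953125 (three non-zero entries), while after the clean restart every chunk reports 0.0 (the opened=0 check beside it applies the same pattern to band states). A standalone sketch of the same query, using the rpc.py path seen throughout this log and assuming the target is still listening on the default RPC socket:

    # count NV-cache chunks holding data, as upgrade_shutdown.sh@59/@82 does
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl |
        jq '[.properties[]
             | select(.name == "cache_device")
             | .chunks[]
             | select(.utilization != 0.0)] | length'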
00:33:14.256 [2024-12-06 18:31:24.666411] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83863 ] 00:33:14.515 [2024-12-06 18:31:24.847862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.515 [2024-12-06 18:31:24.970258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.412  [2024-12-06T18:31:27.250Z] Copying: 705/1024 [MB] (705 MBps) [2024-12-06T18:31:28.632Z] Copying: 1024/1024 [MB] (average 697 MBps) 00:33:18.056 00:33:18.056 18:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:18.056 18:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:19.962 18:31:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:19.962 Validate MD5 checksum, iteration 2 00:33:19.962 18:31:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1644d2601499acc57cc829717d8638b6 00:33:19.962 18:31:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1644d2601499acc57cc829717d8638b6 != \1\6\4\4\d\2\6\0\1\4\9\9\a\c\c\5\7\c\c\8\2\9\7\1\7\d\8\6\3\8\b\6 ]] 00:33:19.962 18:31:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:19.962 18:31:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:19.962 18:31:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:19.962 18:31:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:19.962 18:31:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:19.962 18:31:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:19.962 18:31:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:19.962 18:31:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:19.962 18:31:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:19.962 [2024-12-06 18:31:30.383894] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 
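Each "Validate MD5 checksum" iteration above follows the same pattern: tcp_dd (a wrapper around spdk_dd) reads 1024 blocks of 1 MiB from the ftln1 initiator bdev at an advancing --skip offset, md5sum hashes the output file, and the digest is compared against the value recorded before the target was taken down. A sketch of the loop shape, mirroring the logged flags and paths; ref_sum is a hypothetical stand-in for wherever the reference digests were captured, and iterations=2 matches this run:

  file=/home/vagrant/spdk_repo/spdk/test/ftl/file
  iterations=2          # two 1 GiB windows in this run
  skip=0
  for ((i = 0; i < iterations; i++)); do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      skip=$((skip + 1024))
      sum=$(md5sum "$file" | cut -f1 -d' ')
      # ref_sum[i]: hypothetical array holding the digests recorded earlier
      [[ $sum == "${ref_sum[i]}" ]] || return 1
  done

The skip values logged below (skip=1024, then skip=2048) are this counter advancing by 1024 MiB per pass.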
00:33:19.962 [2024-12-06 18:31:30.384055] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83924 ] 00:33:20.221 [2024-12-06 18:31:30.578812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.221 [2024-12-06 18:31:30.698145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.125  [2024-12-06T18:31:32.959Z] Copying: 727/1024 [MB] (727 MBps) [2024-12-06T18:31:36.239Z] Copying: 1024/1024 [MB] (average 725 MBps) 00:33:25.663 00:33:25.663 18:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:25.663 18:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:27.039 18:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:27.039 18:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f001df28c6d6d2eaf228d1b870c19689 00:33:27.039 18:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f001df28c6d6d2eaf228d1b870c19689 != \f\0\0\1\d\f\2\8\c\6\d\6\d\2\e\a\f\2\2\8\d\1\b\8\7\0\c\1\9\6\8\9 ]] 00:33:27.039 18:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:27.039 18:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:27.040 18:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:33:27.040 18:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83786 ]] 00:33:27.040 18:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83786 00:33:27.040 18:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:33:27.040 18:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:33:27.040 18:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:27.040 18:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:27.040 18:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:27.040 18:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84002 00:33:27.040 18:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:27.040 18:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:27.040 18:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84002 00:33:27.040 18:31:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84002 ']' 00:33:27.040 18:31:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.040 18:31:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:27.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:27.040 18:31:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
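The kill -9 of pid 83786 above is the point of the test: the target dies without FTL getting a chance to persist its shutdown state, and a fresh spdk_tgt (pid 84002 here) is then started from the same tgt.json, forcing the next FTL load down the dirty-shutdown recovery path (visible below as "SHM: clean 0", P2L checkpoint restore, and open-chunk recovery). Roughly what the logged tcp_target_shutdown_dirty plus tcp_target_setup steps do, reconstructed from the commands in the log; variable names are illustrative:

  # Dirty shutdown: SIGKILL leaves FTL no chance to run its clean-shutdown path
  kill -9 "$spdk_tgt_pid"        # 83786 in this run
  unset spdk_tgt_pid
  # Fresh target from the same config; FTL detects the unclean state on load
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"  # returns once /var/tmp/spdk.sock accepts RPCs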
00:33:27.040 18:31:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:27.040 18:31:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:27.299 [2024-12-06 18:31:37.627455] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:33:27.299 [2024-12-06 18:31:37.627588] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84002 ] 00:33:27.299 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83786 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:33:27.299 [2024-12-06 18:31:37.808976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.558 [2024-12-06 18:31:37.920084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.496 [2024-12-06 18:31:38.905532] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:28.496 [2024-12-06 18:31:38.905603] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:28.496 [2024-12-06 18:31:39.051795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.496 [2024-12-06 18:31:39.051999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:28.496 [2024-12-06 18:31:39.052023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:28.496 [2024-12-06 18:31:39.052035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.496 [2024-12-06 18:31:39.052103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.496 [2024-12-06 18:31:39.052116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:28.496 [2024-12-06 18:31:39.052128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:33:28.496 [2024-12-06 18:31:39.052138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.496 [2024-12-06 18:31:39.052168] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:28.496 [2024-12-06 18:31:39.053200] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:28.496 [2024-12-06 18:31:39.053227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.496 [2024-12-06 18:31:39.053238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:28.496 [2024-12-06 18:31:39.053249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.071 ms 00:33:28.496 [2024-12-06 18:31:39.053258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.496 [2024-12-06 18:31:39.053662] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:28.764 [2024-12-06 18:31:39.078437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.764 [2024-12-06 18:31:39.078592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:28.764 [2024-12-06 18:31:39.078615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.815 ms 00:33:28.764 [2024-12-06 18:31:39.078626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.764 [2024-12-06 18:31:39.092871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:33:28.764 [2024-12-06 18:31:39.092911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:28.764 [2024-12-06 18:31:39.092925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:33:28.764 [2024-12-06 18:31:39.092935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.764 [2024-12-06 18:31:39.093440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.764 [2024-12-06 18:31:39.093456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:28.764 [2024-12-06 18:31:39.093467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.426 ms 00:33:28.764 [2024-12-06 18:31:39.093476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.764 [2024-12-06 18:31:39.093536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.764 [2024-12-06 18:31:39.093549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:28.764 [2024-12-06 18:31:39.093560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:33:28.764 [2024-12-06 18:31:39.093570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.764 [2024-12-06 18:31:39.093595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.764 [2024-12-06 18:31:39.093605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:28.764 [2024-12-06 18:31:39.093616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:28.764 [2024-12-06 18:31:39.093625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.764 [2024-12-06 18:31:39.093646] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:28.764 [2024-12-06 18:31:39.097813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.764 [2024-12-06 18:31:39.097844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:28.764 [2024-12-06 18:31:39.097856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.179 ms 00:33:28.764 [2024-12-06 18:31:39.097866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.764 [2024-12-06 18:31:39.097902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.764 [2024-12-06 18:31:39.097913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:28.764 [2024-12-06 18:31:39.097923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:28.764 [2024-12-06 18:31:39.097933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.764 [2024-12-06 18:31:39.097968] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:28.764 [2024-12-06 18:31:39.097991] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:28.764 [2024-12-06 18:31:39.098024] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:28.764 [2024-12-06 18:31:39.098044] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:28.764 [2024-12-06 18:31:39.098130] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:28.764 [2024-12-06 18:31:39.098143] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:28.764 [2024-12-06 18:31:39.098156] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:28.764 [2024-12-06 18:31:39.098168] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:28.764 [2024-12-06 18:31:39.098179] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:33:28.764 [2024-12-06 18:31:39.098190] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:28.764 [2024-12-06 18:31:39.098200] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:28.764 [2024-12-06 18:31:39.098209] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:28.764 [2024-12-06 18:31:39.098218] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:28.764 [2024-12-06 18:31:39.098231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.764 [2024-12-06 18:31:39.098241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:28.764 [2024-12-06 18:31:39.098251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:33:28.764 [2024-12-06 18:31:39.098261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.764 [2024-12-06 18:31:39.098350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.764 [2024-12-06 18:31:39.098361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:28.764 [2024-12-06 18:31:39.098371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:33:28.764 [2024-12-06 18:31:39.098380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.764 [2024-12-06 18:31:39.098476] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:28.764 [2024-12-06 18:31:39.098492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:28.764 [2024-12-06 18:31:39.098502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:28.764 [2024-12-06 18:31:39.098511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.764 [2024-12-06 18:31:39.098523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:28.764 [2024-12-06 18:31:39.098533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:28.764 [2024-12-06 18:31:39.098542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:28.764 [2024-12-06 18:31:39.098551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:28.764 [2024-12-06 18:31:39.098562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:28.764 [2024-12-06 18:31:39.098571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.764 [2024-12-06 18:31:39.098581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:28.764 [2024-12-06 18:31:39.098590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:28.764 [2024-12-06 18:31:39.098599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.764 [2024-12-06 18:31:39.098608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:28.764 [2024-12-06 18:31:39.098617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:33:28.764 [2024-12-06 18:31:39.098626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.764 [2024-12-06 18:31:39.098635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:28.764 [2024-12-06 18:31:39.098644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:28.764 [2024-12-06 18:31:39.098653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.764 [2024-12-06 18:31:39.098662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:28.764 [2024-12-06 18:31:39.098671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:28.764 [2024-12-06 18:31:39.098690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:28.764 [2024-12-06 18:31:39.098699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:28.764 [2024-12-06 18:31:39.098709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:28.764 [2024-12-06 18:31:39.098718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:28.764 [2024-12-06 18:31:39.098726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:28.764 [2024-12-06 18:31:39.098735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:28.764 [2024-12-06 18:31:39.098745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:28.764 [2024-12-06 18:31:39.098753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:28.764 [2024-12-06 18:31:39.098762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:28.764 [2024-12-06 18:31:39.098771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:28.764 [2024-12-06 18:31:39.098780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:28.764 [2024-12-06 18:31:39.098789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:28.764 [2024-12-06 18:31:39.098798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.764 [2024-12-06 18:31:39.098807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:28.764 [2024-12-06 18:31:39.098816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:28.764 [2024-12-06 18:31:39.098825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.764 [2024-12-06 18:31:39.098834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:28.764 [2024-12-06 18:31:39.098843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:28.764 [2024-12-06 18:31:39.098852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.764 [2024-12-06 18:31:39.098861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:28.764 [2024-12-06 18:31:39.098870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:28.764 [2024-12-06 18:31:39.098879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.764 [2024-12-06 18:31:39.098888] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:28.765 [2024-12-06 18:31:39.098898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:28.765 [2024-12-06 18:31:39.098908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:28.765 [2024-12-06 18:31:39.098917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:33:28.765 [2024-12-06 18:31:39.098927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:28.765 [2024-12-06 18:31:39.098936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:28.765 [2024-12-06 18:31:39.098945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:28.765 [2024-12-06 18:31:39.098954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:28.765 [2024-12-06 18:31:39.098964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:28.765 [2024-12-06 18:31:39.098973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:28.765 [2024-12-06 18:31:39.098984] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:28.765 [2024-12-06 18:31:39.098996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:28.765 [2024-12-06 18:31:39.099007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:28.765 [2024-12-06 18:31:39.099017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:28.765 [2024-12-06 18:31:39.099027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:28.765 [2024-12-06 18:31:39.099038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:28.765 [2024-12-06 18:31:39.099048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:28.765 [2024-12-06 18:31:39.099059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:28.765 [2024-12-06 18:31:39.099069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:28.765 [2024-12-06 18:31:39.099079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:28.765 [2024-12-06 18:31:39.099089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:28.765 [2024-12-06 18:31:39.099099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:28.765 [2024-12-06 18:31:39.099109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:28.765 [2024-12-06 18:31:39.099119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:28.765 [2024-12-06 18:31:39.099129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:28.765 [2024-12-06 18:31:39.099140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:28.765 [2024-12-06 18:31:39.099150] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:33:28.765 [2024-12-06 18:31:39.099162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:28.765 [2024-12-06 18:31:39.099177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:28.765 [2024-12-06 18:31:39.099188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:28.765 [2024-12-06 18:31:39.099199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:28.765 [2024-12-06 18:31:39.099209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:28.765 [2024-12-06 18:31:39.099220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.765 [2024-12-06 18:31:39.099231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:28.765 [2024-12-06 18:31:39.099241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.799 ms 00:33:28.765 [2024-12-06 18:31:39.099250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.765 [2024-12-06 18:31:39.132440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.765 [2024-12-06 18:31:39.132586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:28.765 [2024-12-06 18:31:39.132674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.183 ms 00:33:28.765 [2024-12-06 18:31:39.132710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.765 [2024-12-06 18:31:39.132771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.765 [2024-12-06 18:31:39.132804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:28.765 [2024-12-06 18:31:39.132834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:28.765 [2024-12-06 18:31:39.132864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.765 [2024-12-06 18:31:39.172906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.765 [2024-12-06 18:31:39.173050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:28.765 [2024-12-06 18:31:39.173123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.028 ms 00:33:28.765 [2024-12-06 18:31:39.173157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.765 [2024-12-06 18:31:39.173216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.765 [2024-12-06 18:31:39.173248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:28.765 [2024-12-06 18:31:39.173299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:28.765 [2024-12-06 18:31:39.173336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.765 [2024-12-06 18:31:39.173484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.765 [2024-12-06 18:31:39.173607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:28.765 [2024-12-06 18:31:39.173718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:33:28.765 [2024-12-06 18:31:39.173748] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:33:28.765 [2024-12-06 18:31:39.173812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.765 [2024-12-06 18:31:39.173844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:28.765 [2024-12-06 18:31:39.173874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:33:28.765 [2024-12-06 18:31:39.173902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.765 [2024-12-06 18:31:39.195337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.765 [2024-12-06 18:31:39.195473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:28.765 [2024-12-06 18:31:39.195575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.422 ms 00:33:28.765 [2024-12-06 18:31:39.195617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.765 [2024-12-06 18:31:39.195761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.765 [2024-12-06 18:31:39.195811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:33:28.765 [2024-12-06 18:31:39.195907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:33:28.765 [2024-12-06 18:31:39.195942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.765 [2024-12-06 18:31:39.232162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.765 [2024-12-06 18:31:39.232329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:33:28.765 [2024-12-06 18:31:39.232411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.234 ms 00:33:28.765 [2024-12-06 18:31:39.232447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.765 [2024-12-06 18:31:39.247383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.765 [2024-12-06 18:31:39.247514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:28.765 [2024-12-06 18:31:39.247597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.646 ms 00:33:28.765 [2024-12-06 18:31:39.247631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.765 [2024-12-06 18:31:39.330580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.765 [2024-12-06 18:31:39.330806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:28.765 [2024-12-06 18:31:39.330913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 82.999 ms 00:33:28.765 [2024-12-06 18:31:39.330952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.765 [2024-12-06 18:31:39.331151] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:33:28.765 [2024-12-06 18:31:39.331417] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:33:28.765 [2024-12-06 18:31:39.331611] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:33:28.765 [2024-12-06 18:31:39.331837] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:33:28.765 [2024-12-06 18:31:39.331896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.765 [2024-12-06 18:31:39.331929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:33:28.765 [2024-12-06 
18:31:39.332024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.858 ms 00:33:28.765 [2024-12-06 18:31:39.332102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.765 [2024-12-06 18:31:39.332218] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:33:28.765 [2024-12-06 18:31:39.332298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.765 [2024-12-06 18:31:39.332335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:33:28.765 [2024-12-06 18:31:39.332365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.080 ms 00:33:28.765 [2024-12-06 18:31:39.332454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.025 [2024-12-06 18:31:39.353996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.025 [2024-12-06 18:31:39.354154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:33:29.025 [2024-12-06 18:31:39.354304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.521 ms 00:33:29.025 [2024-12-06 18:31:39.354344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.025 [2024-12-06 18:31:39.368494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.025 [2024-12-06 18:31:39.368623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:33:29.025 [2024-12-06 18:31:39.368691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:33:29.025 [2024-12-06 18:31:39.368725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.025 [2024-12-06 18:31:39.368841] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:33:29.025 [2024-12-06 18:31:39.369069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.025 [2024-12-06 18:31:39.369104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:29.025 [2024-12-06 18:31:39.369134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.230 ms 00:33:29.025 [2024-12-06 18:31:39.369163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.593 [2024-12-06 18:31:39.922684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.593 [2024-12-06 18:31:39.922877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:29.593 [2024-12-06 18:31:39.922905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 553.147 ms 00:33:29.593 [2024-12-06 18:31:39.922918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.593 [2024-12-06 18:31:39.928690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.593 [2024-12-06 18:31:39.928734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:29.593 [2024-12-06 18:31:39.928748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.163 ms 00:33:29.593 [2024-12-06 18:31:39.928760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.593 [2024-12-06 18:31:39.929110] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:33:29.593 [2024-12-06 18:31:39.929135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.593 [2024-12-06 18:31:39.929146] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:29.594 [2024-12-06 18:31:39.929158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.333 ms 00:33:29.594 [2024-12-06 18:31:39.929168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.594 [2024-12-06 18:31:39.929196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.594 [2024-12-06 18:31:39.929208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:29.594 [2024-12-06 18:31:39.929218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:29.594 [2024-12-06 18:31:39.929233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.594 [2024-12-06 18:31:39.929282] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 561.337 ms, result 0 00:33:29.594 [2024-12-06 18:31:39.929323] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:33:29.594 [2024-12-06 18:31:39.929400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.594 [2024-12-06 18:31:39.929410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:29.594 [2024-12-06 18:31:39.929420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:33:29.594 [2024-12-06 18:31:39.929429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.162 [2024-12-06 18:31:40.475646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.162 [2024-12-06 18:31:40.475716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:30.162 [2024-12-06 18:31:40.475749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 545.983 ms 00:33:30.162 [2024-12-06 18:31:40.475760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.162 [2024-12-06 18:31:40.481315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.162 [2024-12-06 18:31:40.481470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:30.162 [2024-12-06 18:31:40.481492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.084 ms 00:33:30.162 [2024-12-06 18:31:40.481502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.162 [2024-12-06 18:31:40.481900] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:33:30.162 [2024-12-06 18:31:40.481922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.162 [2024-12-06 18:31:40.481932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:30.162 [2024-12-06 18:31:40.481943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.378 ms 00:33:30.162 [2024-12-06 18:31:40.481953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.162 [2024-12-06 18:31:40.481981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.162 [2024-12-06 18:31:40.481993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:30.162 [2024-12-06 18:31:40.482003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:30.162 [2024-12-06 18:31:40.482013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.162 [2024-12-06 
18:31:40.482050] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 553.621 ms, result 0 00:33:30.162 [2024-12-06 18:31:40.482091] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:30.162 [2024-12-06 18:31:40.482103] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:30.162 [2024-12-06 18:31:40.482116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.162 [2024-12-06 18:31:40.482127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:33:30.163 [2024-12-06 18:31:40.482138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1115.108 ms 00:33:30.163 [2024-12-06 18:31:40.482147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.163 [2024-12-06 18:31:40.482176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.163 [2024-12-06 18:31:40.482192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:33:30.163 [2024-12-06 18:31:40.482203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:30.163 [2024-12-06 18:31:40.482212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.163 [2024-12-06 18:31:40.493841] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:30.163 [2024-12-06 18:31:40.493978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.163 [2024-12-06 18:31:40.493992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:30.163 [2024-12-06 18:31:40.494004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.768 ms 00:33:30.163 [2024-12-06 18:31:40.494014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.163 [2024-12-06 18:31:40.494630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.163 [2024-12-06 18:31:40.494650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:33:30.163 [2024-12-06 18:31:40.494666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.545 ms 00:33:30.163 [2024-12-06 18:31:40.494677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.163 [2024-12-06 18:31:40.496678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.163 [2024-12-06 18:31:40.496809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:33:30.163 [2024-12-06 18:31:40.496828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.985 ms 00:33:30.163 [2024-12-06 18:31:40.496839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.163 [2024-12-06 18:31:40.496885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.163 [2024-12-06 18:31:40.496896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:33:30.163 [2024-12-06 18:31:40.496907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:30.163 [2024-12-06 18:31:40.496922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.163 [2024-12-06 18:31:40.497019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.163 [2024-12-06 18:31:40.497031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:30.163 
[2024-12-06 18:31:40.497042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:33:30.163 [2024-12-06 18:31:40.497051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.163 [2024-12-06 18:31:40.497072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.163 [2024-12-06 18:31:40.497083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:30.163 [2024-12-06 18:31:40.497093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:30.163 [2024-12-06 18:31:40.497102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.163 [2024-12-06 18:31:40.497137] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:30.163 [2024-12-06 18:31:40.497148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.163 [2024-12-06 18:31:40.497158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:30.163 [2024-12-06 18:31:40.497168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:30.163 [2024-12-06 18:31:40.497179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.163 [2024-12-06 18:31:40.497228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.163 [2024-12-06 18:31:40.497239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:30.163 [2024-12-06 18:31:40.497249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:33:30.163 [2024-12-06 18:31:40.497259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.163 [2024-12-06 18:31:40.498214] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1448.358 ms, result 0 00:33:30.163 [2024-12-06 18:31:40.510558] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:30.163 [2024-12-06 18:31:40.526546] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:30.163 [2024-12-06 18:31:40.535942] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:30.163 Validate MD5 checksum, iteration 1 00:33:30.163 18:31:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:30.163 18:31:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:30.163 18:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:30.163 18:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:30.163 18:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:33:30.163 18:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:30.163 18:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:30.163 18:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:30.163 18:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:30.163 18:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:30.163 18:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:30.163 18:31:40 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:30.163 18:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:30.163 18:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:30.163 18:31:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:30.163 [2024-12-06 18:31:40.671839] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization... 00:33:30.163 [2024-12-06 18:31:40.672259] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84037 ] 00:33:30.422 [2024-12-06 18:31:40.853051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.422 [2024-12-06 18:31:40.966254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.331  [2024-12-06T18:31:43.166Z] Copying: 728/1024 [MB] (728 MBps) [2024-12-06T18:31:44.601Z] Copying: 1024/1024 [MB] (average 719 MBps) 00:33:34.025 00:33:34.025 18:31:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:34.025 18:31:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:35.933 18:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:35.933 Validate MD5 checksum, iteration 2 00:33:35.933 18:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1644d2601499acc57cc829717d8638b6 00:33:35.933 18:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1644d2601499acc57cc829717d8638b6 != \1\6\4\4\d\2\6\0\1\4\9\9\a\c\c\5\7\c\c\8\2\9\7\1\7\d\8\6\3\8\b\6 ]] 00:33:35.933 18:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:35.933 18:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:35.933 18:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:35.933 18:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:35.933 18:31:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:35.933 18:31:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:35.933 18:31:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:35.933 18:31:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:35.933 18:31:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:35.933 [2024-12-06 18:31:46.370695] Starting SPDK v25.01-pre git sha1 
1148849d6 / DPDK 24.03.0 initialization... 00:33:35.933 [2024-12-06 18:31:46.370994] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84104 ] 00:33:36.192 [2024-12-06 18:31:46.552418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.192 [2024-12-06 18:31:46.695126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.103  [2024-12-06T18:31:48.939Z] Copying: 730/1024 [MB] (730 MBps) [2024-12-06T18:31:50.318Z] Copying: 1024/1024 [MB] (average 715 MBps) 00:33:39.742 00:33:39.742 18:31:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:39.742 18:31:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f001df28c6d6d2eaf228d1b870c19689 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f001df28c6d6d2eaf228d1b870c19689 != \f\0\0\1\d\f\2\8\c\6\d\6\d\2\e\a\f\2\2\8\d\1\b\8\7\0\c\1\9\6\8\9 ]] 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84002 ]] 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84002 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84002 ']' 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84002 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84002 00:33:41.645 killing process with pid 84002 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84002' 00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84002
00:33:41.645 18:31:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84002
00:33:42.579 [2024-12-06 18:31:53.062378] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:33:42.579 [2024-12-06 18:31:53.082703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:42.579 [2024-12-06 18:31:53.082746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
00:33:42.580 [2024-12-06 18:31:53.082762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:33:42.580 [2024-12-06 18:31:53.082773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.580 [2024-12-06 18:31:53.082796] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:33:42.580 [2024-12-06 18:31:53.086865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:42.580 [2024-12-06 18:31:53.086900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:33:42.580 [2024-12-06 18:31:53.086912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.059 ms
00:33:42.580 [2024-12-06 18:31:53.086921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.580 [2024-12-06 18:31:53.087126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:42.580 [2024-12-06 18:31:53.087139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:33:42.580 [2024-12-06 18:31:53.087150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.181 ms
00:33:42.580 [2024-12-06 18:31:53.087160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.580 [2024-12-06 18:31:53.088493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:42.580 [2024-12-06 18:31:53.088528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P
00:33:42.580 [2024-12-06 18:31:53.088541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.318 ms
00:33:42.580 [2024-12-06 18:31:53.088555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.580 [2024-12-06 18:31:53.089483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:42.580 [2024-12-06 18:31:53.089641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims
00:33:42.580 [2024-12-06 18:31:53.089661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.895 ms
00:33:42.580 [2024-12-06 18:31:53.089672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.580 [2024-12-06 18:31:53.104767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:42.580 [2024-12-06 18:31:53.104903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata
00:33:42.580 [2024-12-06 18:31:53.104930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.080 ms
00:33:42.580 [2024-12-06 18:31:53.104941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.580 [2024-12-06 18:31:53.113081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:42.580 [2024-12-06 18:31:53.113118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata
00:33:42.580 [2024-12-06 18:31:53.113131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.045 ms
00:33:42.580 [2024-12-06 18:31:53.113142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.580 [2024-12-06 18:31:53.113237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:42.580 [2024-12-06 18:31:53.113250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata
00:33:42.580 [2024-12-06 18:31:53.113262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms
00:33:42.580 [2024-12-06 18:31:53.113292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.580 [2024-12-06 18:31:53.128122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:42.580 [2024-12-06 18:31:53.128260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata
00:33:42.580 [2024-12-06 18:31:53.128288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.836 ms
00:33:42.580 [2024-12-06 18:31:53.128298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.580 [2024-12-06 18:31:53.143102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:42.580 [2024-12-06 18:31:53.143137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata
00:33:42.580 [2024-12-06 18:31:53.143149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.791 ms
00:33:42.580 [2024-12-06 18:31:53.143158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.839 [2024-12-06 18:31:53.157339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:42.839 [2024-12-06 18:31:53.157486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock
00:33:42.839 [2024-12-06 18:31:53.157506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.170 ms
00:33:42.839 [2024-12-06 18:31:53.157515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.839 [2024-12-06 18:31:53.172038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:42.839 [2024-12-06 18:31:53.172072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state
00:33:42.839 [2024-12-06 18:31:53.172085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.458 ms
00:33:42.840 [2024-12-06 18:31:53.172094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.840 [2024-12-06 18:31:53.172128] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:33:42.840 [2024-12-06 18:31:53.172144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:33:42.840 [2024-12-06 18:31:53.172157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed
00:33:42.840 [2024-12-06 18:31:53.172168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed
00:33:42.840 [2024-12-06 18:31:53.172179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:33:42.840 [2024-12-06 18:31:53.172190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:33:42.840 [2024-12-06 18:31:53.172200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:33:42.840 [2024-12-06 18:31:53.172210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:33:42.840 [2024-12-06 18:31:53.172220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:33:42.840 [2024-12-06 18:31:53.172231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:33:42.840 [2024-12-06 18:31:53.172242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:33:42.840 [2024-12-06 18:31:53.172252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:33:42.840 [2024-12-06 18:31:53.172262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:33:42.840 [2024-12-06 18:31:53.172288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:33:42.840 [2024-12-06 18:31:53.172299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:33:42.840 [2024-12-06 18:31:53.172310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:33:42.840 [2024-12-06 18:31:53.172320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:33:42.840 [2024-12-06 18:31:53.172351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:33:42.840 [2024-12-06 18:31:53.172362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:33:42.840 [2024-12-06 18:31:53.172374] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]
00:33:42.840 [2024-12-06 18:31:53.172384] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: aaebc61a-a923-4f65-98d7-8a7f2859da96
00:33:42.840 [2024-12-06 18:31:53.172395] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288
00:33:42.840 [2024-12-06 18:31:53.172405] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320
00:33:42.840 [2024-12-06 18:31:53.172414] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0
00:33:42.840 [2024-12-06 18:31:53.172424] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf
00:33:42.840 [2024-12-06 18:31:53.172434] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:33:42.840 [2024-12-06 18:31:53.172444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0
00:33:42.840 [2024-12-06 18:31:53.172459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0
00:33:42.840 [2024-12-06 18:31:53.172468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0
00:33:42.840 [2024-12-06 18:31:53.172476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0
00:33:42.840 [2024-12-06 18:31:53.172487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:42.840 [2024-12-06 18:31:53.172497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics
00:33:42.840 [2024-12-06 18:31:53.172508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.360 ms
00:33:42.840 [2024-12-06 18:31:53.172517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.840 [2024-12-06 18:31:53.192910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:42.840 [2024-12-06 18:31:53.192942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P
00:33:42.840 [2024-12-06 18:31:53.192954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.396 ms
00:33:42.840 [2024-12-06 18:31:53.192964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.840 [2024-12-06 18:31:53.193534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:42.840 [2024-12-06 18:31:53.193546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:33:42.840 [2024-12-06 18:31:53.193556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.543 ms
00:33:42.840 [2024-12-06 18:31:53.193566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.840 [2024-12-06 18:31:53.259133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:42.840 [2024-12-06 18:31:53.259172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:33:42.840 [2024-12-06 18:31:53.259185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:42.840 [2024-12-06 18:31:53.259201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.840 [2024-12-06 18:31:53.259235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:42.840 [2024-12-06 18:31:53.259245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:33:42.840 [2024-12-06 18:31:53.259256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:42.840 [2024-12-06 18:31:53.259287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.840 [2024-12-06 18:31:53.259364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:42.840 [2024-12-06 18:31:53.259378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:33:42.840 [2024-12-06 18:31:53.259388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:42.840 [2024-12-06 18:31:53.259398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.840 [2024-12-06 18:31:53.259421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:42.840 [2024-12-06 18:31:53.259431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:33:42.840 [2024-12-06 18:31:53.259442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:42.840 [2024-12-06 18:31:53.259451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:42.840 [2024-12-06 18:31:53.382771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:42.840 [2024-12-06 18:31:53.382819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:33:42.840 [2024-12-06 18:31:53.382834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:42.840 [2024-12-06 18:31:53.382844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:43.099 [2024-12-06 18:31:53.484412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:43.099 [2024-12-06 18:31:53.484460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:33:43.099 [2024-12-06 18:31:53.484474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:43.099 [2024-12-06 18:31:53.484485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:43.099 [2024-12-06 18:31:53.484587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:43.099 [2024-12-06 18:31:53.484599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:33:43.099 [2024-12-06 18:31:53.484610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:43.099 [2024-12-06 18:31:53.484620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:43.099 [2024-12-06 18:31:53.484678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:43.099 [2024-12-06 18:31:53.484703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:33:43.099 [2024-12-06 18:31:53.484713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:43.100 [2024-12-06 18:31:53.484723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:43.100 [2024-12-06 18:31:53.484837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:43.100 [2024-12-06 18:31:53.484850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:33:43.100 [2024-12-06 18:31:53.484861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:43.100 [2024-12-06 18:31:53.484871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:43.100 [2024-12-06 18:31:53.484905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:43.100 [2024-12-06 18:31:53.484917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:33:43.100 [2024-12-06 18:31:53.484931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:43.100 [2024-12-06 18:31:53.484941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:43.100 [2024-12-06 18:31:53.484978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:43.100 [2024-12-06 18:31:53.484989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:33:43.100 [2024-12-06 18:31:53.484998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:43.100 [2024-12-06 18:31:53.485008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:43.100 [2024-12-06 18:31:53.485048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:43.100 [2024-12-06 18:31:53.485063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:33:43.100 [2024-12-06 18:31:53.485073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:43.100 [2024-12-06 18:31:53.485083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:43.100 [2024-12-06 18:31:53.485197] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 403.117 ms, result 0
00:33:44.475 18:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:33:44.475 18:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:33:44.475 18:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:33:44.475 18:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:33:44.475 18:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:33:44.475 18:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:33:44.475 Remove shared memory files
00:33:44.475 18:31:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:33:44.475 18:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:33:44.475 18:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:33:44.475 18:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:33:44.475 18:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83786
00:33:44.475 18:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:33:44.475 18:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:33:44.475 ************************************
00:33:44.475 END TEST ftl_upgrade_shutdown
00:33:44.475 ************************************
00:33:44.475
00:33:44.475 real 1m28.770s
00:33:44.475 user 2m2.366s
00:33:44.475 sys 0m21.058s
00:33:44.475 18:31:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:44.475 18:31:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:33:44.475 18:31:54 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:33:44.475 18:31:54 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:33:44.475 18:31:54 ftl -- ftl/ftl.sh@14 -- # killprocess 76729
00:33:44.475 18:31:54 ftl -- common/autotest_common.sh@954 -- # '[' -z 76729 ']'
00:33:44.475 18:31:54 ftl -- common/autotest_common.sh@958 -- # kill -0 76729
00:33:44.475 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76729) - No such process
00:33:44.475 Process with pid 76729 is not found
00:33:44.475 18:31:54 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76729 is not found'
00:33:44.475 18:31:54 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:33:44.475 18:31:54 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84223
00:33:44.475 18:31:54 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84223
00:33:44.475 18:31:54 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:33:44.475 18:31:54 ftl -- common/autotest_common.sh@835 -- # '[' -z 84223 ']'
00:33:44.475 18:31:54 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:44.475 18:31:54 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:44.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:44.475 18:31:54 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:44.475 18:31:54 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:44.475 18:31:54 ftl -- common/autotest_common.sh@10 -- # set +x
00:33:44.475 [2024-12-06 18:31:54.957060] Starting SPDK v25.01-pre git sha1 1148849d6 / DPDK 24.03.0 initialization...
00:33:44.475 [2024-12-06 18:31:54.957185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84223 ]
00:33:44.746 [2024-12-06 18:31:55.135786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:44.746 [2024-12-06 18:31:55.251541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:45.683 18:31:56 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:45.683 18:31:56 ftl -- common/autotest_common.sh@868 -- # return 0
00:33:45.683 18:31:56 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:33:45.942 nvme0n1
00:33:45.942 18:31:56 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:33:45.942 18:31:56 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:33:45.942 18:31:56 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:33:46.199 18:31:56 ftl -- ftl/common.sh@28 -- # stores=d355aacd-de75-4469-83d1-757ef9c75cb5
00:33:46.199 18:31:56 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:33:46.199 18:31:56 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d355aacd-de75-4469-83d1-757ef9c75cb5
00:33:46.199 18:31:56 ftl -- ftl/ftl.sh@23 -- # killprocess 84223
00:33:46.199 18:31:56 ftl -- common/autotest_common.sh@954 -- # '[' -z 84223 ']'
00:33:46.199 18:31:56 ftl -- common/autotest_common.sh@958 -- # kill -0 84223
00:33:46.458 18:31:56 ftl -- common/autotest_common.sh@959 -- # uname
00:33:46.458 18:31:56 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:46.458 18:31:56 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84223
00:33:46.458 killing process with pid 84223
00:33:46.458 18:31:56 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:46.458 18:31:56 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:46.458 18:31:56 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84223'
00:33:46.458 18:31:56 ftl -- common/autotest_common.sh@973 -- # kill 84223
00:33:46.458 18:31:56 ftl -- common/autotest_common.sh@978 -- # wait 84223
00:33:48.992 18:31:59 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:33:48.992 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:49.252 Waiting for block devices as requested
00:33:49.252 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:33:49.252 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:33:49.512 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:33:49.512 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:33:54.784 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:33:54.784 Remove shared memory files
00:33:54.784 18:32:05 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:33:54.784 18:32:05 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:33:54.784 18:32:05 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:33:54.784 18:32:05 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:33:54.784 18:32:05 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:33:54.784 18:32:05 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:33:54.784 18:32:05 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:33:54.784 ************************************
00:33:54.784 END TEST ftl
00:33:54.784 ************************************
00:33:54.784
00:33:54.784 real 11m3.289s
00:33:54.784 user 13m33.675s
00:33:54.784 sys 1m26.811s
00:33:54.784 18:32:05 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:54.784 18:32:05 ftl -- common/autotest_common.sh@10 -- # set +x
00:33:54.784 18:32:05 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:33:54.784 18:32:05 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:33:54.784 18:32:05 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:33:54.784 18:32:05 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:33:54.784 18:32:05 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:33:54.784 18:32:05 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:33:54.784 18:32:05 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:33:54.784 18:32:05 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:33:54.784 18:32:05 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:33:54.784 18:32:05 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:33:54.784 18:32:05 -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:54.784 18:32:05 -- common/autotest_common.sh@10 -- # set +x
00:33:54.784 18:32:05 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:33:54.784 18:32:05 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:33:54.784 18:32:05 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:33:54.784 18:32:05 -- common/autotest_common.sh@10 -- # set +x
00:33:57.326 INFO: APP EXITING
00:33:57.326 INFO: killing all VMs
00:33:57.326 INFO: killing vhost app
00:33:57.326 INFO: EXIT DONE
00:33:57.326 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:57.895 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:33:57.895 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:33:57.895 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:33:57.895 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:33:58.465 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:58.727 Cleaning
00:33:58.727 Removing: /var/run/dpdk/spdk0/config
00:33:58.727 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:33:58.727 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:33:58.727 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:33:58.727 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:33:58.727 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:33:58.727 Removing: /var/run/dpdk/spdk0/hugepage_info
00:33:58.727 Removing: /var/run/dpdk/spdk0
00:33:58.727 Removing: /var/run/dpdk/spdk_pid57552
00:33:59.009 Removing: /var/run/dpdk/spdk_pid57798
00:33:59.009 Removing: /var/run/dpdk/spdk_pid58027
00:33:59.009 Removing: /var/run/dpdk/spdk_pid58131
00:33:59.009 Removing: /var/run/dpdk/spdk_pid58187
00:33:59.009 Removing: /var/run/dpdk/spdk_pid58326
00:33:59.009 Removing: /var/run/dpdk/spdk_pid58344
00:33:59.009 Removing: /var/run/dpdk/spdk_pid58554
00:33:59.009 Removing: /var/run/dpdk/spdk_pid58666
00:33:59.009 Removing: /var/run/dpdk/spdk_pid58773
00:33:59.009 Removing: /var/run/dpdk/spdk_pid58895
00:33:59.009 Removing: /var/run/dpdk/spdk_pid59003
00:33:59.009 Removing: /var/run/dpdk/spdk_pid59048
00:33:59.009 Removing: /var/run/dpdk/spdk_pid59084
00:33:59.009 Removing: /var/run/dpdk/spdk_pid59155
00:33:59.009 Removing: /var/run/dpdk/spdk_pid59250
00:33:59.009 Removing: /var/run/dpdk/spdk_pid59710
00:33:59.009 Removing: /var/run/dpdk/spdk_pid59785
00:33:59.009 Removing: /var/run/dpdk/spdk_pid59861
00:33:59.009 Removing: /var/run/dpdk/spdk_pid59881
00:33:59.009 Removing: /var/run/dpdk/spdk_pid60043
00:33:59.009 Removing: /var/run/dpdk/spdk_pid60059
00:33:59.009 Removing: /var/run/dpdk/spdk_pid60213
00:33:59.009 Removing: /var/run/dpdk/spdk_pid60234
00:33:59.009 Removing: /var/run/dpdk/spdk_pid60304
00:33:59.009 Removing: /var/run/dpdk/spdk_pid60327
00:33:59.009 Removing: /var/run/dpdk/spdk_pid60391
00:33:59.009 Removing: /var/run/dpdk/spdk_pid60415
00:33:59.009 Removing: /var/run/dpdk/spdk_pid60613
00:33:59.009 Removing: /var/run/dpdk/spdk_pid60655
00:33:59.009 Removing: /var/run/dpdk/spdk_pid60744
00:33:59.009 Removing: /var/run/dpdk/spdk_pid60938
00:33:59.009 Removing: /var/run/dpdk/spdk_pid61033
00:33:59.009 Removing: /var/run/dpdk/spdk_pid61075
00:33:59.009 Removing: /var/run/dpdk/spdk_pid61549
00:33:59.009 Removing: /var/run/dpdk/spdk_pid61647
00:33:59.009 Removing: /var/run/dpdk/spdk_pid61761
00:33:59.009 Removing: /var/run/dpdk/spdk_pid61815
00:33:59.009 Removing: /var/run/dpdk/spdk_pid61839
00:33:59.009 Removing: /var/run/dpdk/spdk_pid61928
00:33:59.009 Removing: /var/run/dpdk/spdk_pid62572
00:33:59.009 Removing: /var/run/dpdk/spdk_pid62614
00:33:59.009 Removing: /var/run/dpdk/spdk_pid63107
00:33:59.009 Removing: /var/run/dpdk/spdk_pid63210
00:33:59.009 Removing: /var/run/dpdk/spdk_pid63330
00:33:59.009 Removing: /var/run/dpdk/spdk_pid63383
00:33:59.009 Removing: /var/run/dpdk/spdk_pid63414
00:33:59.009 Removing: /var/run/dpdk/spdk_pid63439
00:33:59.009 Removing: /var/run/dpdk/spdk_pid65332
00:33:59.009 Removing: /var/run/dpdk/spdk_pid65479
00:33:59.009 Removing: /var/run/dpdk/spdk_pid65488
00:33:59.009 Removing: /var/run/dpdk/spdk_pid65506
00:33:59.009 Removing: /var/run/dpdk/spdk_pid65546
00:33:59.009 Removing: /var/run/dpdk/spdk_pid65550
00:33:59.009 Removing: /var/run/dpdk/spdk_pid65562
00:33:59.009 Removing: /var/run/dpdk/spdk_pid65611
00:33:59.009 Removing: /var/run/dpdk/spdk_pid65616
00:33:59.009 Removing: /var/run/dpdk/spdk_pid65628
00:33:59.009 Removing: /var/run/dpdk/spdk_pid65673
00:33:59.009 Removing: /var/run/dpdk/spdk_pid65677
00:33:59.009 Removing: /var/run/dpdk/spdk_pid65689
00:33:59.270 Removing: /var/run/dpdk/spdk_pid67111
00:33:59.270 Removing: /var/run/dpdk/spdk_pid67219
00:33:59.270 Removing: /var/run/dpdk/spdk_pid68655
00:33:59.270 Removing: /var/run/dpdk/spdk_pid70415
00:33:59.270 Removing: /var/run/dpdk/spdk_pid70495
00:33:59.270 Removing: /var/run/dpdk/spdk_pid70581
00:33:59.270 Removing: /var/run/dpdk/spdk_pid70694
00:33:59.270 Removing: /var/run/dpdk/spdk_pid70792
00:33:59.270 Removing: /var/run/dpdk/spdk_pid70888
00:33:59.270 Removing: /var/run/dpdk/spdk_pid70973
00:33:59.270 Removing: /var/run/dpdk/spdk_pid71054
00:33:59.270 Removing: /var/run/dpdk/spdk_pid71164
00:33:59.270 Removing: /var/run/dpdk/spdk_pid71261
00:33:59.270 Removing: /var/run/dpdk/spdk_pid71361
00:33:59.270 Removing: /var/run/dpdk/spdk_pid71442
00:33:59.270 Removing: /var/run/dpdk/spdk_pid71523
00:33:59.270 Removing: /var/run/dpdk/spdk_pid71637
00:33:59.270 Removing: /var/run/dpdk/spdk_pid71730
00:33:59.270 Removing: /var/run/dpdk/spdk_pid71831
00:33:59.270 Removing: /var/run/dpdk/spdk_pid71911
00:33:59.270 Removing: /var/run/dpdk/spdk_pid71986
00:33:59.270 Removing: /var/run/dpdk/spdk_pid72096
00:33:59.270 Removing: /var/run/dpdk/spdk_pid72193
00:33:59.270 Removing: /var/run/dpdk/spdk_pid72296
00:33:59.270 Removing: /var/run/dpdk/spdk_pid72378
00:33:59.270 Removing: /var/run/dpdk/spdk_pid72459
00:33:59.270 Removing: /var/run/dpdk/spdk_pid72541
00:33:59.270 Removing: /var/run/dpdk/spdk_pid72619
00:33:59.270 Removing: /var/run/dpdk/spdk_pid72728
00:33:59.270 Removing: /var/run/dpdk/spdk_pid72820
00:33:59.270 Removing: /var/run/dpdk/spdk_pid72920
00:33:59.270 Removing: /var/run/dpdk/spdk_pid73004
00:33:59.270 Removing: /var/run/dpdk/spdk_pid73085
00:33:59.270 Removing: /var/run/dpdk/spdk_pid73160
00:33:59.270 Removing: /var/run/dpdk/spdk_pid73240
00:33:59.270 Removing: /var/run/dpdk/spdk_pid73350
00:33:59.270 Removing: /var/run/dpdk/spdk_pid73441
00:33:59.270 Removing: /var/run/dpdk/spdk_pid73594
00:33:59.270 Removing: /var/run/dpdk/spdk_pid73895
00:33:59.270 Removing: /var/run/dpdk/spdk_pid73936
00:33:59.270 Removing: /var/run/dpdk/spdk_pid74390
00:33:59.270 Removing: /var/run/dpdk/spdk_pid74582
00:33:59.270 Removing: /var/run/dpdk/spdk_pid74682
00:33:59.270 Removing: /var/run/dpdk/spdk_pid74792
00:33:59.270 Removing: /var/run/dpdk/spdk_pid74851
00:33:59.270 Removing: /var/run/dpdk/spdk_pid74876
00:33:59.270 Removing: /var/run/dpdk/spdk_pid75187
00:33:59.270 Removing: /var/run/dpdk/spdk_pid75255
00:33:59.270 Removing: /var/run/dpdk/spdk_pid75346
00:33:59.270 Removing: /var/run/dpdk/spdk_pid75774
00:33:59.270 Removing: /var/run/dpdk/spdk_pid75922
00:33:59.270 Removing: /var/run/dpdk/spdk_pid76729
00:33:59.270 Removing: /var/run/dpdk/spdk_pid76878
00:33:59.270 Removing: /var/run/dpdk/spdk_pid77073
00:33:59.270 Removing: /var/run/dpdk/spdk_pid77183
00:33:59.270 Removing: /var/run/dpdk/spdk_pid77517
00:33:59.270 Removing: /var/run/dpdk/spdk_pid77768
00:33:59.270 Removing: /var/run/dpdk/spdk_pid78133
00:33:59.270 Removing: /var/run/dpdk/spdk_pid78338
00:33:59.270 Removing: /var/run/dpdk/spdk_pid78490
00:33:59.270 Removing: /var/run/dpdk/spdk_pid78554
00:33:59.270 Removing: /var/run/dpdk/spdk_pid78697
00:33:59.530 Removing: /var/run/dpdk/spdk_pid78733
00:33:59.530 Removing: /var/run/dpdk/spdk_pid78797
00:33:59.530 Removing: /var/run/dpdk/spdk_pid79007
00:33:59.530 Removing: /var/run/dpdk/spdk_pid79247
00:33:59.530 Removing: /var/run/dpdk/spdk_pid79667
00:33:59.530 Removing: /var/run/dpdk/spdk_pid80081
00:33:59.530 Removing: /var/run/dpdk/spdk_pid80493
00:33:59.530 Removing: /var/run/dpdk/spdk_pid80976
00:33:59.530 Removing: /var/run/dpdk/spdk_pid81124
00:33:59.530 Removing: /var/run/dpdk/spdk_pid81226
00:33:59.530 Removing: /var/run/dpdk/spdk_pid81822
00:33:59.530 Removing: /var/run/dpdk/spdk_pid81895
00:33:59.530 Removing: /var/run/dpdk/spdk_pid82336
00:33:59.530 Removing: /var/run/dpdk/spdk_pid82699
00:33:59.530 Removing: /var/run/dpdk/spdk_pid83204
00:33:59.530 Removing: /var/run/dpdk/spdk_pid83355
00:33:59.530 Removing: /var/run/dpdk/spdk_pid83408
00:33:59.530 Removing: /var/run/dpdk/spdk_pid83472
00:33:59.530 Removing: /var/run/dpdk/spdk_pid83528
00:33:59.530 Removing: /var/run/dpdk/spdk_pid83592
00:33:59.530 Removing: /var/run/dpdk/spdk_pid83786
00:33:59.530 Removing: /var/run/dpdk/spdk_pid83863
00:33:59.530 Removing: /var/run/dpdk/spdk_pid83924
00:33:59.530 Removing: /var/run/dpdk/spdk_pid84002
00:33:59.530 Removing: /var/run/dpdk/spdk_pid84037
00:33:59.530 Removing: /var/run/dpdk/spdk_pid84104
00:33:59.530 Removing: /var/run/dpdk/spdk_pid84223
00:33:59.530 Clean
00:33:59.530 18:32:10 -- common/autotest_common.sh@1453 -- # return 0
00:33:59.530 18:32:10 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:33:59.530 18:32:10 -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:59.530 18:32:10 -- common/autotest_common.sh@10 -- # set +x
00:33:59.530 18:32:10 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:33:59.530 18:32:10 -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:59.530 18:32:10 -- common/autotest_common.sh@10 -- # set +x
00:33:59.790 18:32:10 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:59.790 18:32:10 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:33:59.790 18:32:10 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:33:59.790 18:32:10 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:33:59.790 18:32:10 -- spdk/autotest.sh@398 -- # hostname
00:33:59.790 18:32:10 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:33:59.790 geninfo: WARNING: invalid characters removed from testname!
00:34:26.340 18:32:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:27.722 18:32:38 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:30.253 18:32:40 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:32.161 18:32:42 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:34.215 18:32:44 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:36.115 18:32:46 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:38.649 18:32:48 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:38.649 18:32:48 -- spdk/autorun.sh@1 -- $ timing_finish
00:34:38.649 18:32:48 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:34:38.649 18:32:48 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:38.649 18:32:48 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:38.649 18:32:48 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:38.649 + [[ -n 5253 ]]
00:34:38.649 + sudo kill 5253
00:34:38.658 [Pipeline] }
00:34:38.675 [Pipeline] // timeout
00:34:38.680 [Pipeline] }
00:34:38.699 [Pipeline] // stage
00:34:38.704 [Pipeline] }
00:34:38.721 [Pipeline] // catchError
00:34:38.732 [Pipeline] stage
00:34:38.735 [Pipeline] { (Stop VM)
00:34:38.748 [Pipeline] sh
00:34:39.029 + vagrant halt
00:34:41.563 ==> default: Halting domain...
00:34:48.202 [Pipeline] sh
00:34:48.485 + vagrant destroy -f
00:34:51.013 ==> default: Removing domain...
00:34:51.592 [Pipeline] sh
00:34:51.874 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
00:34:51.883 [Pipeline] }
00:34:51.897 [Pipeline] // stage
00:34:51.903 [Pipeline] }
00:34:51.917 [Pipeline] // dir
00:34:51.923 [Pipeline] }
00:34:51.937 [Pipeline] // wrap
00:34:51.944 [Pipeline] }
00:34:51.957 [Pipeline] // catchError
00:34:51.966 [Pipeline] stage
00:34:51.968 [Pipeline] { (Epilogue)
00:34:51.982 [Pipeline] sh
00:34:52.266 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:57.588 [Pipeline] catchError
00:34:57.590 [Pipeline] {
00:34:57.603 [Pipeline] sh
00:34:57.885 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:57.885 Artifacts sizes are good
00:34:57.893 [Pipeline] }
00:34:57.908 [Pipeline] // catchError
00:34:57.920 [Pipeline] archiveArtifacts
00:34:57.927 Archiving artifacts
00:34:58.040 [Pipeline] cleanWs
00:34:58.052 [WS-CLEANUP] Deleting project workspace...
00:34:58.052 [WS-CLEANUP] Deferred wipeout is used...
00:34:58.058 [WS-CLEANUP] done
00:34:58.060 [Pipeline] }
00:34:58.076 [Pipeline] // stage
00:34:58.082 [Pipeline] }
00:34:58.096 [Pipeline] // node
00:34:58.102 [Pipeline] End of Pipeline
00:34:58.139 Finished: SUCCESS